2025-06-02 00:00:06.577756 | Job console starting
2025-06-02 00:00:06.594004 | Updating git repos
2025-06-02 00:00:06.676359 | Cloning repos into workspace
2025-06-02 00:00:06.918827 | Restoring repo states
2025-06-02 00:00:06.981646 | Merging changes
2025-06-02 00:00:06.981676 | Checking out repos
2025-06-02 00:00:07.267463 | Preparing playbooks
2025-06-02 00:00:08.062086 | Running Ansible setup
2025-06-02 00:00:13.370934 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-02 00:00:14.436368 |
2025-06-02 00:00:14.436534 | PLAY [Base pre]
2025-06-02 00:00:14.455800 |
2025-06-02 00:00:14.455958 | TASK [Setup log path fact]
2025-06-02 00:00:14.476832 | orchestrator | ok
2025-06-02 00:00:14.496093 |
2025-06-02 00:00:14.496248 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 00:00:14.536799 | orchestrator | ok
2025-06-02 00:00:14.550152 |
2025-06-02 00:00:14.550291 | TASK [emit-job-header : Print job information]
2025-06-02 00:00:14.646884 | # Job Information
2025-06-02 00:00:14.647464 | Ansible Version: 2.16.14
2025-06-02 00:00:14.647554 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-02 00:00:14.647630 | Pipeline: periodic-midnight
2025-06-02 00:00:14.647659 | Executor: 521e9411259a
2025-06-02 00:00:14.647695 | Triggered by: https://github.com/osism/testbed
2025-06-02 00:00:14.647882 | Event ID: 49427694729a45dab0304304e71020b4
2025-06-02 00:00:14.657076 |
2025-06-02 00:00:14.657196 | LOOP [emit-job-header : Print node information]
2025-06-02 00:00:14.854285 | orchestrator | ok:
2025-06-02 00:00:14.855686 | orchestrator | # Node Information
2025-06-02 00:00:14.855760 | orchestrator | Inventory Hostname: orchestrator
2025-06-02 00:00:14.856166 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-02 00:00:14.856217 | orchestrator | Username: zuul-testbed01
2025-06-02 00:00:14.856243 | orchestrator | Distro: Debian 12.11
2025-06-02 00:00:14.856272 | orchestrator | Provider: static-testbed
2025-06-02 00:00:14.856295 | orchestrator | Region:
2025-06-02 00:00:14.856317 | orchestrator | Label: testbed-orchestrator
2025-06-02 00:00:14.856451 | orchestrator | Product Name: OpenStack Nova
2025-06-02 00:00:14.856479 | orchestrator | Interface IP: 81.163.193.140
2025-06-02 00:00:14.881444 |
2025-06-02 00:00:14.881629 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-02 00:00:15.684779 | orchestrator -> localhost | changed
2025-06-02 00:00:15.693115 |
2025-06-02 00:00:15.693241 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-02 00:00:17.169496 | orchestrator -> localhost | changed
2025-06-02 00:00:17.191150 |
2025-06-02 00:00:17.191432 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-02 00:00:17.461949 | orchestrator -> localhost | ok
2025-06-02 00:00:17.474016 |
2025-06-02 00:00:17.474152 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-02 00:00:17.493141 | orchestrator | ok
2025-06-02 00:00:17.508964 | orchestrator | included: /var/lib/zuul/builds/6866d38e26b6403fad245960ab6da0bc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-02 00:00:17.516473 |
2025-06-02 00:00:17.516565 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-02 00:00:19.448722 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-02 00:00:19.449193 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/6866d38e26b6403fad245960ab6da0bc/work/6866d38e26b6403fad245960ab6da0bc_id_rsa
2025-06-02 00:00:19.449297 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/6866d38e26b6403fad245960ab6da0bc/work/6866d38e26b6403fad245960ab6da0bc_id_rsa.pub
2025-06-02 00:00:19.449366 | orchestrator -> localhost | The key fingerprint is:
2025-06-02 00:00:19.449426 | orchestrator -> localhost | SHA256:6/FLOV3502Ek2aKHsT2lYn1BOoVEbhy+X9Le6T+ZoR8 zuul-build-sshkey
2025-06-02 00:00:19.449482 | orchestrator -> localhost | The key's randomart image is:
2025-06-02 00:00:19.449554 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-02 00:00:19.449650 | orchestrator -> localhost | | o+.o |
2025-06-02 00:00:19.449751 | orchestrator -> localhost | | +.B |
2025-06-02 00:00:19.449812 | orchestrator -> localhost | | . @ = |
2025-06-02 00:00:19.449862 | orchestrator -> localhost | | O Oo.|
2025-06-02 00:00:19.449912 | orchestrator -> localhost | | S * B++o|
2025-06-02 00:00:19.449971 | orchestrator -> localhost | | ..oo.==*|
2025-06-02 00:00:19.450021 | orchestrator -> localhost | | o + . .EO|
2025-06-02 00:00:19.450071 | orchestrator -> localhost | | . + . ..+o|
2025-06-02 00:00:19.450124 | orchestrator -> localhost | | . o. .o+|
2025-06-02 00:00:19.450177 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-02 00:00:19.450310 | orchestrator -> localhost | ok: Runtime: 0:00:01.493978
2025-06-02 00:00:19.465915 |
2025-06-02 00:00:19.466051 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-02 00:00:19.498801 | orchestrator | ok
2025-06-02 00:00:19.510475 | orchestrator | included: /var/lib/zuul/builds/6866d38e26b6403fad245960ab6da0bc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-02 00:00:19.519578 |
2025-06-02 00:00:19.519682 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-02 00:00:19.542759 | orchestrator | skipping: Conditional result was False
2025-06-02 00:00:19.564911 |
2025-06-02 00:00:19.565022 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-02 00:00:20.166418 | orchestrator | changed
2025-06-02 00:00:20.175023 |
2025-06-02 00:00:20.175150 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-02 00:00:20.462802 | orchestrator | ok
2025-06-02 00:00:20.470742 |
2025-06-02 00:00:20.471508 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-02 00:00:21.048160 | orchestrator | ok
2025-06-02 00:00:21.064283 |
2025-06-02 00:00:21.064493 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-02 00:00:21.486775 | orchestrator | ok
2025-06-02 00:00:21.495217 |
2025-06-02 00:00:21.495333 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-02 00:00:21.511359 | orchestrator | skipping: Conditional result was False
2025-06-02 00:00:21.527161 |
2025-06-02 00:00:21.527342 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-02 00:00:21.979784 | orchestrator -> localhost | changed
2025-06-02 00:00:22.005525 |
2025-06-02 00:00:22.005690 | TASK [add-build-sshkey : Add back temp key]
2025-06-02 00:00:22.349682 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/6866d38e26b6403fad245960ab6da0bc/work/6866d38e26b6403fad245960ab6da0bc_id_rsa (zuul-build-sshkey)
2025-06-02 00:00:22.350241 | orchestrator -> localhost | ok: Runtime: 0:00:00.022939
2025-06-02 00:00:22.366495 |
2025-06-02 00:00:22.366676 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-02 00:00:22.873740 | orchestrator | ok
2025-06-02 00:00:22.881146 |
2025-06-02 00:00:22.881247 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-02 00:00:22.914529 | orchestrator | skipping: Conditional result was False
2025-06-02 00:00:22.962709 |
2025-06-02 00:00:22.962815 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-02 00:00:23.380994 | orchestrator | ok
2025-06-02 00:00:23.394027 |
2025-06-02 00:00:23.394130 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-02 00:00:23.423466 | orchestrator | ok
2025-06-02 00:00:23.430781 |
2025-06-02 00:00:23.430914 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-02 00:00:23.728541 | orchestrator -> localhost | ok
2025-06-02 00:00:23.735845 |
2025-06-02 00:00:23.735940 | TASK [validate-host : Collect information about the host]
2025-06-02 00:00:24.986387 | orchestrator | ok
2025-06-02 00:00:24.999956 |
2025-06-02 00:00:25.000074 | TASK [validate-host : Sanitize hostname]
2025-06-02 00:00:25.047617 | orchestrator | ok
2025-06-02 00:00:25.052806 |
2025-06-02 00:00:25.052899 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-02 00:00:25.665798 | orchestrator -> localhost | changed
2025-06-02 00:00:25.677531 |
2025-06-02 00:00:25.677701 | TASK [validate-host : Collect information about zuul worker]
2025-06-02 00:00:26.213008 | orchestrator | ok
2025-06-02 00:00:26.222452 |
2025-06-02 00:00:26.222647 | TASK [validate-host : Write out all zuul information for each host]
2025-06-02 00:00:27.165120 | orchestrator -> localhost | changed
2025-06-02 00:00:27.186624 |
2025-06-02 00:00:27.186804 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-02 00:00:27.499299 | orchestrator | ok
2025-06-02 00:00:27.511175 |
2025-06-02 00:00:27.513393 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-02 00:00:47.361620 | orchestrator | changed:
2025-06-02 00:00:47.361807 | orchestrator | .d..t...... src/
2025-06-02 00:00:47.361842 | orchestrator | .d..t...... src/github.com/
2025-06-02 00:00:47.361866 | orchestrator | .d..t...... src/github.com/osism/
2025-06-02 00:00:47.361888 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-02 00:00:47.361908 | orchestrator | RedHat.yml
2025-06-02 00:00:47.372500 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-02 00:00:47.372518 | orchestrator | RedHat.yml
2025-06-02 00:00:47.372570 | orchestrator | = 1.53.0"...
2025-06-02 00:01:01.313623 | orchestrator | 00:01:01.313 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-02 00:01:02.396176 | orchestrator | 00:01:02.395 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-02 00:01:03.300395 | orchestrator | 00:01:03.300 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 00:01:04.834403 | orchestrator | 00:01:04.834 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-02 00:01:06.053631 | orchestrator | 00:01:06.053 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-02 00:01:07.347589 | orchestrator | 00:01:07.347 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-02 00:01:08.448644 | orchestrator | 00:01:08.448 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 00:01:08.448788 | orchestrator | 00:01:08.448 STDOUT terraform: Providers are signed by their developers.
2025-06-02 00:01:08.448825 | orchestrator | 00:01:08.448 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-02 00:01:08.448837 | orchestrator | 00:01:08.448 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-02 00:01:08.448851 | orchestrator | 00:01:08.448 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-02 00:01:08.448984 | orchestrator | 00:01:08.448 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-02 00:01:08.449096 | orchestrator | 00:01:08.448 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-02 00:01:08.449153 | orchestrator | 00:01:08.449 STDOUT terraform: you run "tofu init" in the future.
2025-06-02 00:01:08.449252 | orchestrator | 00:01:08.449 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-02 00:01:08.449354 | orchestrator | 00:01:08.449 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-02 00:01:08.449453 | orchestrator | 00:01:08.449 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-02 00:01:08.449494 | orchestrator | 00:01:08.449 STDOUT terraform: should now work.
2025-06-02 00:01:08.449610 | orchestrator | 00:01:08.449 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-02 00:01:08.449695 | orchestrator | 00:01:08.449 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-02 00:01:08.450167 | orchestrator | 00:01:08.449 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-02 00:01:08.639689 | orchestrator | 00:01:08.639 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-06-02 00:01:08.876421 | orchestrator | 00:01:08.876 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-02 00:01:08.876552 | orchestrator | 00:01:08.876 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-02 00:01:08.876571 | orchestrator | 00:01:08.876 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-02 00:01:08.876584 | orchestrator | 00:01:08.876 STDOUT terraform: for this configuration.
2025-06-02 00:01:09.116622 | orchestrator | 00:01:09.116 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-06-02 00:01:09.225571 | orchestrator | 00:01:09.225 STDOUT terraform: ci.auto.tfvars
2025-06-02 00:01:09.232065 | orchestrator | 00:01:09.230 STDOUT terraform: default_custom.tf
2025-06-02 00:01:09.448794 | orchestrator | 00:01:09.448 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-06-02 00:01:10.484325 | orchestrator | 00:01:10.484 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-02 00:01:11.021027 | orchestrator | 00:01:11.020 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-02 00:01:11.244054 | orchestrator | 00:01:11.243 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-02 00:01:11.244130 | orchestrator | 00:01:11.244 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-02 00:01:11.244140 | orchestrator | 00:01:11.244 STDOUT terraform:  + create
2025-06-02 00:01:11.244208 | orchestrator | 00:01:11.244 STDOUT terraform:  <= read (data resources)
2025-06-02 00:01:11.244264 | orchestrator | 00:01:11.244 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-02 00:01:11.244364 | orchestrator | 00:01:11.244 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-06-02 00:01:11.244419 | orchestrator | 00:01:11.244 STDOUT terraform:  # (config refers to values not yet known)
2025-06-02 00:01:11.244474 | orchestrator | 00:01:11.244 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-02 00:01:11.244527 | orchestrator | 00:01:11.244 STDOUT terraform:  + checksum = (known after apply)
2025-06-02 00:01:11.244584 | orchestrator | 00:01:11.244 STDOUT terraform:  + created_at = (known after apply)
2025-06-02 00:01:11.244636 | orchestrator | 00:01:11.244 STDOUT terraform:  + file = (known after apply)
2025-06-02 00:01:11.244691 | orchestrator | 00:01:11.244 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.244745 | orchestrator | 00:01:11.244 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 00:01:11.244816 | orchestrator | 00:01:11.244 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-02 00:01:11.244868 | orchestrator | 00:01:11.244 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-02 00:01:11.244905 | orchestrator | 00:01:11.244 STDOUT terraform:  + most_recent = true
2025-06-02 00:01:11.244959 | orchestrator | 00:01:11.244 STDOUT terraform:  + name = (known after apply)
2025-06-02 00:01:11.245008 | orchestrator | 00:01:11.244 STDOUT terraform:  + protected = (known after apply)
2025-06-02 00:01:11.245060 | orchestrator | 00:01:11.245 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.245111 | orchestrator | 00:01:11.245 STDOUT terraform:  + schema = (known after apply)
2025-06-02 00:01:11.245163 | orchestrator | 00:01:11.245 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-02 00:01:11.245220 | orchestrator | 00:01:11.245 STDOUT terraform:  + tags = (known after apply)
2025-06-02 00:01:11.245274 | orchestrator | 00:01:11.245 STDOUT terraform:  + updated_at = (known after apply)
2025-06-02 00:01:11.245298 | orchestrator | 00:01:11.245 STDOUT terraform:  }
2025-06-02 00:01:11.245381 | orchestrator | 00:01:11.245 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-06-02 00:01:11.245433 | orchestrator | 00:01:11.245 STDOUT terraform:  # (config refers to values not yet known)
2025-06-02 00:01:11.245499 | orchestrator | 00:01:11.245 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-02 00:01:11.245546 | orchestrator | 00:01:11.245 STDOUT terraform:  + checksum = (known after apply)
2025-06-02 00:01:11.245597 | orchestrator | 00:01:11.245 STDOUT terraform:  + created_at = (known after apply)
2025-06-02 00:01:11.245648 | orchestrator | 00:01:11.245 STDOUT terraform:  + file = (known after apply)
2025-06-02 00:01:11.245699 | orchestrator | 00:01:11.245 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.245790 | orchestrator | 00:01:11.245 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 00:01:11.245844 | orchestrator | 00:01:11.245 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-02 00:01:11.245897 | orchestrator | 00:01:11.245 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-02 00:01:11.245936 | orchestrator | 00:01:11.245 STDOUT terraform:  + most_recent = true
2025-06-02 00:01:11.245989 | orchestrator | 00:01:11.245 STDOUT terraform:  + name = (known after apply)
2025-06-02 00:01:11.246066 | orchestrator | 00:01:11.245 STDOUT terraform:  + protected = (known after apply)
2025-06-02 00:01:11.246119 | orchestrator | 00:01:11.246 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.246170 | orchestrator | 00:01:11.246 STDOUT terraform:  + schema = (known after apply)
2025-06-02 00:01:11.246225 | orchestrator | 00:01:11.246 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-02 00:01:11.246279 | orchestrator | 00:01:11.246 STDOUT terraform:  + tags = (known after apply)
2025-06-02 00:01:11.246332 | orchestrator | 00:01:11.246 STDOUT terraform:  + updated_at = (known after apply)
2025-06-02 00:01:11.246357 | orchestrator | 00:01:11.246 STDOUT terraform:  }
2025-06-02 00:01:11.246411 | orchestrator | 00:01:11.246 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-06-02 00:01:11.246492 | orchestrator | 00:01:11.246 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-06-02 00:01:11.246572 | orchestrator | 00:01:11.246 STDOUT terraform:  + content = (known after apply)
2025-06-02 00:01:11.246636 | orchestrator | 00:01:11.246 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 00:01:11.246700 | orchestrator | 00:01:11.246 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 00:01:11.246783 | orchestrator | 00:01:11.246 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 00:01:11.246847 | orchestrator | 00:01:11.246 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 00:01:11.246910 | orchestrator | 00:01:11.246 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 00:01:11.246970 | orchestrator | 00:01:11.246 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 00:01:11.247031 | orchestrator | 00:01:11.246 STDOUT terraform:  + directory_permission = "0777"
2025-06-02 00:01:11.247124 | orchestrator | 00:01:11.247 STDOUT terraform:  + file_permission = "0644"
2025-06-02 00:01:11.247191 | orchestrator | 00:01:11.247 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-06-02 00:01:11.247260 | orchestrator | 00:01:11.247 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.247295 | orchestrator | 00:01:11.247 STDOUT terraform:  }
2025-06-02 00:01:11.247512 | orchestrator | 00:01:11.247 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-06-02 00:01:11.247570 | orchestrator | 00:01:11.247 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-06-02 00:01:11.247637 | orchestrator | 00:01:11.247 STDOUT terraform:  + content = (known after apply)
2025-06-02 00:01:11.247698 | orchestrator | 00:01:11.247 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 00:01:11.247792 | orchestrator | 00:01:11.247 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 00:01:11.247864 | orchestrator | 00:01:11.247 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 00:01:11.247934 | orchestrator | 00:01:11.247 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 00:01:11.247994 | orchestrator | 00:01:11.247 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 00:01:11.248065 | orchestrator | 00:01:11.247 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 00:01:11.248108 | orchestrator | 00:01:11.248 STDOUT terraform:  + directory_permission = "0777"
2025-06-02 00:01:11.248152 | orchestrator | 00:01:11.248 STDOUT terraform:  + file_permission = "0644"
2025-06-02 00:01:11.248208 | orchestrator | 00:01:11.248 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-06-02 00:01:11.248276 | orchestrator | 00:01:11.248 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.248306 | orchestrator | 00:01:11.248 STDOUT terraform:  }
2025-06-02 00:01:11.248351 | orchestrator | 00:01:11.248 STDOUT terraform:  # local_file.inventory will be created
2025-06-02 00:01:11.248393 | orchestrator | 00:01:11.248 STDOUT terraform:  + resource "local_file" "inventory" {
2025-06-02 00:01:11.248458 | orchestrator | 00:01:11.248 STDOUT terraform:  + content = (known after apply)
2025-06-02 00:01:11.248519 | orchestrator | 00:01:11.248 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 00:01:11.248582 | orchestrator | 00:01:11.248 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 00:01:11.248645 | orchestrator | 00:01:11.248 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 00:01:11.248734 | orchestrator | 00:01:11.248 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 00:01:11.248830 | orchestrator | 00:01:11.248 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 00:01:11.248897 | orchestrator | 00:01:11.248 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 00:01:11.248942 | orchestrator | 00:01:11.248 STDOUT terraform:  + directory_permission = "0777"
2025-06-02 00:01:11.248986 | orchestrator | 00:01:11.248 STDOUT terraform:  + file_permission = "0644"
2025-06-02 00:01:11.249050 | orchestrator | 00:01:11.248 STDOUT terraform:  + filename = "inventory.ci"
2025-06-02 00:01:11.249118 | orchestrator | 00:01:11.249 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.249141 | orchestrator | 00:01:11.249 STDOUT terraform:  }
2025-06-02 00:01:11.249192 | orchestrator | 00:01:11.249 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-06-02 00:01:11.249245 | orchestrator | 00:01:11.249 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-06-02 00:01:11.249300 | orchestrator | 00:01:11.249 STDOUT terraform:  + content = (sensitive value)
2025-06-02 00:01:11.249362 | orchestrator | 00:01:11.249 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 00:01:11.249426 | orchestrator | 00:01:11.249 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 00:01:11.249489 | orchestrator | 00:01:11.249 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 00:01:11.249551 | orchestrator | 00:01:11.249 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 00:01:11.249612 | orchestrator | 00:01:11.249 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 00:01:11.249685 | orchestrator | 00:01:11.249 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 00:01:11.249728 | orchestrator | 00:01:11.249 STDOUT terraform:  + directory_permission = "0700"
2025-06-02 00:01:11.249789 | orchestrator | 00:01:11.249 STDOUT terraform:  + file_permission = "0600"
2025-06-02 00:01:11.249840 | orchestrator | 00:01:11.249 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-06-02 00:01:11.249905 | orchestrator | 00:01:11.249 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.249926 | orchestrator | 00:01:11.249 STDOUT terraform:  }
2025-06-02 00:01:11.249986 | orchestrator | 00:01:11.249 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-06-02 00:01:11.250064 | orchestrator | 00:01:11.249 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-06-02 00:01:11.250100 | orchestrator | 00:01:11.250 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.250124 | orchestrator | 00:01:11.250 STDOUT terraform:  }
2025-06-02 00:01:11.250224 | orchestrator | 00:01:11.250 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-02 00:01:11.250308 | orchestrator | 00:01:11.250 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-02 00:01:11.250406 | orchestrator | 00:01:11.250 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 00:01:11.250451 | orchestrator | 00:01:11.250 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 00:01:11.250520 | orchestrator | 00:01:11.250 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.250581 | orchestrator | 00:01:11.250 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 00:01:11.250645 | orchestrator | 00:01:11.250 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 00:01:11.250725 | orchestrator | 00:01:11.250 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-06-02 00:01:11.250819 | orchestrator | 00:01:11.250 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.250851 | orchestrator | 00:01:11.250 STDOUT terraform:  + size = 80
2025-06-02 00:01:11.250893 | orchestrator | 00:01:11.250 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 00:01:11.250936 | orchestrator | 00:01:11.250 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 00:01:11.250960 | orchestrator | 00:01:11.250 STDOUT terraform:  }
2025-06-02 00:01:11.251046 | orchestrator | 00:01:11.250 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-02 00:01:11.251125 | orchestrator | 00:01:11.251 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 00:01:11.251189 | orchestrator | 00:01:11.251 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 00:01:11.251231 | orchestrator | 00:01:11.251 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 00:01:11.251295 | orchestrator | 00:01:11.251 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.251382 | orchestrator | 00:01:11.251 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 00:01:11.251447 | orchestrator | 00:01:11.251 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 00:01:11.251534 | orchestrator | 00:01:11.251 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-06-02 00:01:11.251594 | orchestrator | 00:01:11.251 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.251639 | orchestrator | 00:01:11.251 STDOUT terraform:  + size = 80
2025-06-02 00:01:11.251684 | orchestrator | 00:01:11.251 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 00:01:11.251728 | orchestrator | 00:01:11.251 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 00:01:11.251771 | orchestrator | 00:01:11.251 STDOUT terraform:  }
2025-06-02 00:01:11.251861 | orchestrator | 00:01:11.251 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-02 00:01:11.251942 | orchestrator | 00:01:11.251 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 00:01:11.252013 | orchestrator | 00:01:11.251 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 00:01:11.252048 | orchestrator | 00:01:11.252 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 00:01:11.252112 | orchestrator | 00:01:11.252 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.252178 | orchestrator | 00:01:11.252 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 00:01:11.252267 | orchestrator | 00:01:11.252 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 00:01:11.252383 | orchestrator | 00:01:11.252 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-06-02 00:01:11.252492 | orchestrator | 00:01:11.252 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.252550 | orchestrator | 00:01:11.252 STDOUT terraform:  + size = 80
2025-06-02 00:01:11.252614 | orchestrator | 00:01:11.252 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 00:01:11.252694 | orchestrator | 00:01:11.252 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 00:01:11.252729 | orchestrator | 00:01:11.252 STDOUT terraform:  }
2025-06-02 00:01:11.252937 | orchestrator | 00:01:11.252 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-02 00:01:11.253073 | orchestrator | 00:01:11.252 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 00:01:11.253175 | orchestrator | 00:01:11.253 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 00:01:11.253250 | orchestrator | 00:01:11.253 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 00:01:11.253369 | orchestrator | 00:01:11.253 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.253465 | orchestrator | 00:01:11.253 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 00:01:11.253564 | orchestrator | 00:01:11.253 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 00:01:11.253690 | orchestrator | 00:01:11.253 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-06-02 00:01:11.253853 | orchestrator | 00:01:11.253 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.253912 | orchestrator | 00:01:11.253 STDOUT terraform:  + size = 80
2025-06-02 00:01:11.253989 | orchestrator | 00:01:11.253 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 00:01:11.254085 | orchestrator | 00:01:11.253 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 00:01:11.254120 | orchestrator | 00:01:11.254 STDOUT terraform:  }
2025-06-02 00:01:11.254260 | orchestrator | 00:01:11.254 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-02 00:01:11.254391 | orchestrator | 00:01:11.254 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 00:01:11.254548 | orchestrator | 00:01:11.254 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 00:01:11.254617 | orchestrator | 00:01:11.254 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 00:01:11.254718 | orchestrator | 00:01:11.254 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.254887 | orchestrator | 00:01:11.254 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 00:01:11.254988 | orchestrator | 00:01:11.254 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 00:01:11.255108 | orchestrator | 00:01:11.254 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-06-02 00:01:11.255196 | orchestrator | 00:01:11.255 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.255263 | orchestrator | 00:01:11.255 STDOUT terraform:  + size = 80
2025-06-02 00:01:11.255326 | orchestrator | 00:01:11.255 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 00:01:11.255390 | orchestrator | 00:01:11.255 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 00:01:11.255426 | orchestrator | 00:01:11.255 STDOUT terraform:  }
2025-06-02 00:01:11.255557 | orchestrator | 00:01:11.255 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-02 00:01:11.255724 | orchestrator | 00:01:11.255 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 00:01:11.255875 | orchestrator | 00:01:11.255 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 00:01:11.255941 | orchestrator | 00:01:11.255 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 00:01:11.256038 | orchestrator | 00:01:11.255 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.256129 | orchestrator | 00:01:11.256 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 00:01:11.256234 | orchestrator | 00:01:11.256 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 00:01:11.256355 | orchestrator | 00:01:11.256 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-06-02 00:01:11.256451 | orchestrator | 00:01:11.256 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.256507 | orchestrator | 00:01:11.256 STDOUT terraform:  + size = 80
2025-06-02 00:01:11.256556 | orchestrator | 00:01:11.256 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 00:01:11.256585 | orchestrator | 00:01:11.256 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 00:01:11.256610 | orchestrator | 00:01:11.256 STDOUT terraform:  }
2025-06-02 00:01:11.256723 | orchestrator | 00:01:11.256 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-02 00:01:11.256858 | orchestrator | 00:01:11.256 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 00:01:11.256978 | orchestrator | 00:01:11.256 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 00:01:11.257016 | orchestrator | 00:01:11.256 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 00:01:11.257072 | orchestrator | 00:01:11.257 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.257182 | orchestrator | 00:01:11.257 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 00:01:11.257249 | orchestrator | 00:01:11.257 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 00:01:11.257320 | orchestrator | 00:01:11.257 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-06-02 00:01:11.257376 | orchestrator | 00:01:11.257 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.257405 | orchestrator | 00:01:11.257 STDOUT terraform:  + size = 80
2025-06-02 00:01:11.257440 | orchestrator | 00:01:11.257 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 00:01:11.257475 | orchestrator | 00:01:11.257 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 00:01:11.257495 | orchestrator | 00:01:11.257 STDOUT terraform:  }
2025-06-02 00:01:11.257561 | orchestrator | 00:01:11.257 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-02 00:01:11.257624 | orchestrator | 00:01:11.257 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 00:01:11.257689 | orchestrator | 00:01:11.257 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 00:01:11.257729 | orchestrator | 00:01:11.257 STDOUT terraform:  +
availability_zone = "nova" 2025-06-02 00:01:11.257822 | orchestrator | 00:01:11.257 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.257886 | orchestrator | 00:01:11.257 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 00:01:11.257944 | orchestrator | 00:01:11.257 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-02 00:01:11.257997 | orchestrator | 00:01:11.257 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.258054 | orchestrator | 00:01:11.257 STDOUT terraform:  + size = 20 2025-06-02 00:01:11.258091 | orchestrator | 00:01:11.258 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 00:01:11.258134 | orchestrator | 00:01:11.258 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 00:01:11.258157 | orchestrator | 00:01:11.258 STDOUT terraform:  } 2025-06-02 00:01:11.258244 | orchestrator | 00:01:11.258 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-02 00:01:11.258350 | orchestrator | 00:01:11.258 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 00:01:11.258417 | orchestrator | 00:01:11.258 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 00:01:11.258453 | orchestrator | 00:01:11.258 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.258519 | orchestrator | 00:01:11.258 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.258574 | orchestrator | 00:01:11.258 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 00:01:11.258630 | orchestrator | 00:01:11.258 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-02 00:01:11.258682 | orchestrator | 00:01:11.258 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.258715 | orchestrator | 00:01:11.258 STDOUT terraform:  + size = 20 2025-06-02 00:01:11.258795 | orchestrator | 00:01:11.258 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 00:01:11.258807 | orchestrator | 
00:01:11.258 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 00:01:11.258812 | orchestrator | 00:01:11.258 STDOUT terraform:  } 2025-06-02 00:01:11.258879 | orchestrator | 00:01:11.258 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-02 00:01:11.258944 | orchestrator | 00:01:11.258 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 00:01:11.259025 | orchestrator | 00:01:11.258 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 00:01:11.259079 | orchestrator | 00:01:11.259 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.259164 | orchestrator | 00:01:11.259 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.259221 | orchestrator | 00:01:11.259 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 00:01:11.259298 | orchestrator | 00:01:11.259 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-02 00:01:11.259357 | orchestrator | 00:01:11.259 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.259386 | orchestrator | 00:01:11.259 STDOUT terraform:  + size = 20 2025-06-02 00:01:11.259426 | orchestrator | 00:01:11.259 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 00:01:11.259463 | orchestrator | 00:01:11.259 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 00:01:11.259481 | orchestrator | 00:01:11.259 STDOUT terraform:  } 2025-06-02 00:01:11.259549 | orchestrator | 00:01:11.259 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-02 00:01:11.259615 | orchestrator | 00:01:11.259 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 00:01:11.259679 | orchestrator | 00:01:11.259 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 00:01:11.259719 | orchestrator | 00:01:11.259 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.259817 | orchestrator | 00:01:11.259 STDOUT 
terraform:  + id = (known after apply) 2025-06-02 00:01:11.259872 | orchestrator | 00:01:11.259 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 00:01:11.259931 | orchestrator | 00:01:11.259 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-02 00:01:11.259984 | orchestrator | 00:01:11.259 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.260014 | orchestrator | 00:01:11.259 STDOUT terraform:  + size = 20 2025-06-02 00:01:11.260054 | orchestrator | 00:01:11.260 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 00:01:11.260085 | orchestrator | 00:01:11.260 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 00:01:11.260103 | orchestrator | 00:01:11.260 STDOUT terraform:  } 2025-06-02 00:01:11.260163 | orchestrator | 00:01:11.260 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-02 00:01:11.260219 | orchestrator | 00:01:11.260 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 00:01:11.260265 | orchestrator | 00:01:11.260 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 00:01:11.260295 | orchestrator | 00:01:11.260 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.260343 | orchestrator | 00:01:11.260 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.260389 | orchestrator | 00:01:11.260 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 00:01:11.260440 | orchestrator | 00:01:11.260 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-02 00:01:11.260488 | orchestrator | 00:01:11.260 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.260515 | orchestrator | 00:01:11.260 STDOUT terraform:  + size = 20 2025-06-02 00:01:11.260552 | orchestrator | 00:01:11.260 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 00:01:11.260584 | orchestrator | 00:01:11.260 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 00:01:11.260597 | 
orchestrator | 00:01:11.260 STDOUT terraform:  } 2025-06-02 00:01:11.260656 | orchestrator | 00:01:11.260 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-02 00:01:11.260717 | orchestrator | 00:01:11.260 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 00:01:11.260797 | orchestrator | 00:01:11.260 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 00:01:11.260825 | orchestrator | 00:01:11.260 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.260873 | orchestrator | 00:01:11.260 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.260919 | orchestrator | 00:01:11.260 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 00:01:11.260970 | orchestrator | 00:01:11.260 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-02 00:01:11.261027 | orchestrator | 00:01:11.260 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.261068 | orchestrator | 00:01:11.261 STDOUT terraform:  + size = 20 2025-06-02 00:01:11.261122 | orchestrator | 00:01:11.261 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 00:01:11.261161 | orchestrator | 00:01:11.261 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 00:01:11.261178 | orchestrator | 00:01:11.261 STDOUT terraform:  } 2025-06-02 00:01:11.261238 | orchestrator | 00:01:11.261 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-02 00:01:11.261297 | orchestrator | 00:01:11.261 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 00:01:11.261343 | orchestrator | 00:01:11.261 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 00:01:11.261377 | orchestrator | 00:01:11.261 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.261424 | orchestrator | 00:01:11.261 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.261470 | orchestrator | 
00:01:11.261 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 00:01:11.261520 | orchestrator | 00:01:11.261 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-02 00:01:11.261567 | orchestrator | 00:01:11.261 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.261594 | orchestrator | 00:01:11.261 STDOUT terraform:  + size = 20 2025-06-02 00:01:11.261636 | orchestrator | 00:01:11.261 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 00:01:11.261648 | orchestrator | 00:01:11.261 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 00:01:11.261668 | orchestrator | 00:01:11.261 STDOUT terraform:  } 2025-06-02 00:01:11.261727 | orchestrator | 00:01:11.261 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-02 00:01:11.261810 | orchestrator | 00:01:11.261 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 00:01:11.261866 | orchestrator | 00:01:11.261 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 00:01:11.261907 | orchestrator | 00:01:11.261 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.261956 | orchestrator | 00:01:11.261 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.262006 | orchestrator | 00:01:11.261 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 00:01:11.262082 | orchestrator | 00:01:11.262 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-02 00:01:11.262130 | orchestrator | 00:01:11.262 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.262167 | orchestrator | 00:01:11.262 STDOUT terraform:  + size = 20 2025-06-02 00:01:11.262199 | orchestrator | 00:01:11.262 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 00:01:11.262232 | orchestrator | 00:01:11.262 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 00:01:11.262252 | orchestrator | 00:01:11.262 STDOUT terraform:  } 2025-06-02 00:01:11.262309 | orchestrator | 
00:01:11.262 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-02 00:01:11.262366 | orchestrator | 00:01:11.262 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 00:01:11.262415 | orchestrator | 00:01:11.262 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 00:01:11.262447 | orchestrator | 00:01:11.262 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.262495 | orchestrator | 00:01:11.262 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.262542 | orchestrator | 00:01:11.262 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 00:01:11.262592 | orchestrator | 00:01:11.262 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-02 00:01:11.262639 | orchestrator | 00:01:11.262 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.262665 | orchestrator | 00:01:11.262 STDOUT terraform:  + size = 20 2025-06-02 00:01:11.262698 | orchestrator | 00:01:11.262 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 00:01:11.262738 | orchestrator | 00:01:11.262 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 00:01:11.262799 | orchestrator | 00:01:11.262 STDOUT terraform:  } 2025-06-02 00:01:11.262882 | orchestrator | 00:01:11.262 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-02 00:01:11.262940 | orchestrator | 00:01:11.262 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-02 00:01:11.262988 | orchestrator | 00:01:11.262 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 00:01:11.263034 | orchestrator | 00:01:11.262 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 00:01:11.263103 | orchestrator | 00:01:11.263 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 00:01:11.263150 | orchestrator | 00:01:11.263 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 
00:01:11.263183 | orchestrator | 00:01:11.263 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.263212 | orchestrator | 00:01:11.263 STDOUT terraform:  + config_drive = true 2025-06-02 00:01:11.263273 | orchestrator | 00:01:11.263 STDOUT terraform:  + created = (known after apply) 2025-06-02 00:01:11.263339 | orchestrator | 00:01:11.263 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 00:01:11.263398 | orchestrator | 00:01:11.263 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-02 00:01:11.263445 | orchestrator | 00:01:11.263 STDOUT terraform:  + force_delete = false 2025-06-02 00:01:11.263519 | orchestrator | 00:01:11.263 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 00:01:11.263591 | orchestrator | 00:01:11.263 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.263677 | orchestrator | 00:01:11.263 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 00:01:11.263718 | orchestrator | 00:01:11.263 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 00:01:11.263771 | orchestrator | 00:01:11.263 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 00:01:11.263804 | orchestrator | 00:01:11.263 STDOUT terraform:  + name = "testbed-manager" 2025-06-02 00:01:11.263852 | orchestrator | 00:01:11.263 STDOUT terraform:  + power_state = "active" 2025-06-02 00:01:11.263915 | orchestrator | 00:01:11.263 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.263975 | orchestrator | 00:01:11.263 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 00:01:11.264021 | orchestrator | 00:01:11.263 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 00:01:11.264087 | orchestrator | 00:01:11.264 STDOUT terraform:  + updated = (known after apply) 2025-06-02 00:01:11.264153 | orchestrator | 00:01:11.264 STDOUT terraform:  + user_data = (known after apply) 2025-06-02 00:01:11.264186 | orchestrator | 00:01:11.264 STDOUT terraform:  + block_device 
{ 2025-06-02 00:01:11.264235 | orchestrator | 00:01:11.264 STDOUT terraform:  + boot_index = 0 2025-06-02 00:01:11.264291 | orchestrator | 00:01:11.264 STDOUT terraform:  + delete_on_termination = false 2025-06-02 00:01:11.264360 | orchestrator | 00:01:11.264 STDOUT terraform:  + destination_type = "volume" 2025-06-02 00:01:11.264401 | orchestrator | 00:01:11.264 STDOUT terraform:  + multiattach = false 2025-06-02 00:01:11.264438 | orchestrator | 00:01:11.264 STDOUT terraform:  + source_type = "volume" 2025-06-02 00:01:11.264487 | orchestrator | 00:01:11.264 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 00:01:11.264505 | orchestrator | 00:01:11.264 STDOUT terraform:  } 2025-06-02 00:01:11.264524 | orchestrator | 00:01:11.264 STDOUT terraform:  + network { 2025-06-02 00:01:11.264549 | orchestrator | 00:01:11.264 STDOUT terraform:  + access_network = false 2025-06-02 00:01:11.264587 | orchestrator | 00:01:11.264 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 00:01:11.264626 | orchestrator | 00:01:11.264 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 00:01:11.264668 | orchestrator | 00:01:11.264 STDOUT terraform:  + mac = (known after apply) 2025-06-02 00:01:11.264707 | orchestrator | 00:01:11.264 STDOUT terraform:  + name = (known after apply) 2025-06-02 00:01:11.264767 | orchestrator | 00:01:11.264 STDOUT terraform:  + port = (known after apply) 2025-06-02 00:01:11.264801 | orchestrator | 00:01:11.264 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 00:01:11.264816 | orchestrator | 00:01:11.264 STDOUT terraform:  } 2025-06-02 00:01:11.264840 | orchestrator | 00:01:11.264 STDOUT terraform:  } 2025-06-02 00:01:11.264895 | orchestrator | 00:01:11.264 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-02 00:01:11.264947 | orchestrator | 00:01:11.264 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 00:01:11.264990 | orchestrator | 
00:01:11.264 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 00:01:11.265039 | orchestrator | 00:01:11.264 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 00:01:11.265095 | orchestrator | 00:01:11.265 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 00:01:11.265138 | orchestrator | 00:01:11.265 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 00:01:11.265175 | orchestrator | 00:01:11.265 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.265214 | orchestrator | 00:01:11.265 STDOUT terraform:  + config_drive = true 2025-06-02 00:01:11.265277 | orchestrator | 00:01:11.265 STDOUT terraform:  + created = (known after apply) 2025-06-02 00:01:11.265340 | orchestrator | 00:01:11.265 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 00:01:11.265377 | orchestrator | 00:01:11.265 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 00:01:11.265407 | orchestrator | 00:01:11.265 STDOUT terraform:  + force_delete = false 2025-06-02 00:01:11.265453 | orchestrator | 00:01:11.265 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 00:01:11.265501 | orchestrator | 00:01:11.265 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.265544 | orchestrator | 00:01:11.265 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 00:01:11.265588 | orchestrator | 00:01:11.265 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 00:01:11.265624 | orchestrator | 00:01:11.265 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 00:01:11.265663 | orchestrator | 00:01:11.265 STDOUT terraform:  + name = "testbed-node-0" 2025-06-02 00:01:11.265697 | orchestrator | 00:01:11.265 STDOUT terraform:  + power_state = "active" 2025-06-02 00:01:11.265796 | orchestrator | 00:01:11.265 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.265806 | orchestrator | 00:01:11.265 STDOUT terraform:  + security_groups = (known after apply) 
2025-06-02 00:01:11.265839 | orchestrator | 00:01:11.265 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 00:01:11.265883 | orchestrator | 00:01:11.265 STDOUT terraform:  + updated = (known after apply) 2025-06-02 00:01:11.265947 | orchestrator | 00:01:11.265 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 00:01:11.265968 | orchestrator | 00:01:11.265 STDOUT terraform:  + block_device { 2025-06-02 00:01:11.266004 | orchestrator | 00:01:11.265 STDOUT terraform:  + boot_index = 0 2025-06-02 00:01:11.266082 | orchestrator | 00:01:11.266 STDOUT terraform:  + delete_on_termination = false 2025-06-02 00:01:11.266120 | orchestrator | 00:01:11.266 STDOUT terraform:  + destination_type = "volume" 2025-06-02 00:01:11.266205 | orchestrator | 00:01:11.266 STDOUT terraform:  + multiattach = false 2025-06-02 00:01:11.266210 | orchestrator | 00:01:11.266 STDOUT terraform:  + source_type = "volume" 2025-06-02 00:01:11.266238 | orchestrator | 00:01:11.266 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 00:01:11.266255 | orchestrator | 00:01:11.266 STDOUT terraform:  } 2025-06-02 00:01:11.266274 | orchestrator | 00:01:11.266 STDOUT terraform:  + network { 2025-06-02 00:01:11.266305 | orchestrator | 00:01:11.266 STDOUT terraform:  + access_network = false 2025-06-02 00:01:11.266360 | orchestrator | 00:01:11.266 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 00:01:11.266420 | orchestrator | 00:01:11.266 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 00:01:11.266462 | orchestrator | 00:01:11.266 STDOUT terraform:  + mac = (known after apply) 2025-06-02 00:01:11.266502 | orchestrator | 00:01:11.266 STDOUT terraform:  + name = (known after apply) 2025-06-02 00:01:11.266547 | orchestrator | 00:01:11.266 STDOUT terraform:  + port = (known after apply) 2025-06-02 00:01:11.266590 | orchestrator | 00:01:11.266 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 00:01:11.266607 | 
orchestrator | 00:01:11.266 STDOUT terraform:  } 2025-06-02 00:01:11.266626 | orchestrator | 00:01:11.266 STDOUT terraform:  } 2025-06-02 00:01:11.266684 | orchestrator | 00:01:11.266 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-02 00:01:11.266738 | orchestrator | 00:01:11.266 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 00:01:11.266816 | orchestrator | 00:01:11.266 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 00:01:11.266859 | orchestrator | 00:01:11.266 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 00:01:11.266903 | orchestrator | 00:01:11.266 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 00:01:11.266947 | orchestrator | 00:01:11.266 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 00:01:11.266977 | orchestrator | 00:01:11.266 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.267002 | orchestrator | 00:01:11.266 STDOUT terraform:  + config_drive = true 2025-06-02 00:01:11.267046 | orchestrator | 00:01:11.266 STDOUT terraform:  + created = (known after apply) 2025-06-02 00:01:11.267094 | orchestrator | 00:01:11.267 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 00:01:11.267132 | orchestrator | 00:01:11.267 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 00:01:11.267162 | orchestrator | 00:01:11.267 STDOUT terraform:  + force_delete = false 2025-06-02 00:01:11.267214 | orchestrator | 00:01:11.267 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 00:01:11.267290 | orchestrator | 00:01:11.267 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.267338 | orchestrator | 00:01:11.267 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 00:01:11.267378 | orchestrator | 00:01:11.267 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 00:01:11.267407 | orchestrator | 00:01:11.267 STDOUT terraform:  + 
key_pair = "testbed" 2025-06-02 00:01:11.267444 | orchestrator | 00:01:11.267 STDOUT terraform:  + name = "testbed-node-1" 2025-06-02 00:01:11.267473 | orchestrator | 00:01:11.267 STDOUT terraform:  + power_state = "active" 2025-06-02 00:01:11.267514 | orchestrator | 00:01:11.267 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.267555 | orchestrator | 00:01:11.267 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 00:01:11.267580 | orchestrator | 00:01:11.267 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 00:01:11.267621 | orchestrator | 00:01:11.267 STDOUT terraform:  + updated = (known after apply) 2025-06-02 00:01:11.267679 | orchestrator | 00:01:11.267 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 00:01:11.267697 | orchestrator | 00:01:11.267 STDOUT terraform:  + block_device { 2025-06-02 00:01:11.267726 | orchestrator | 00:01:11.267 STDOUT terraform:  + boot_index = 0 2025-06-02 00:01:11.267790 | orchestrator | 00:01:11.267 STDOUT terraform:  + delete_on_termination = false 2025-06-02 00:01:11.267806 | orchestrator | 00:01:11.267 STDOUT terraform:  + destination_type = "volume" 2025-06-02 00:01:11.267838 | orchestrator | 00:01:11.267 STDOUT terraform:  + multiattach = false 2025-06-02 00:01:11.267878 | orchestrator | 00:01:11.267 STDOUT terraform:  + source_type = "volume" 2025-06-02 00:01:11.267915 | orchestrator | 00:01:11.267 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 00:01:11.267930 | orchestrator | 00:01:11.267 STDOUT terraform:  } 2025-06-02 00:01:11.267948 | orchestrator | 00:01:11.267 STDOUT terraform:  + network { 2025-06-02 00:01:11.267971 | orchestrator | 00:01:11.267 STDOUT terraform:  + access_network = false 2025-06-02 00:01:11.268006 | orchestrator | 00:01:11.267 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 00:01:11.268041 | orchestrator | 00:01:11.268 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 
00:01:11.268080 | orchestrator | 00:01:11.268 STDOUT terraform:  + mac = (known after apply) 2025-06-02 00:01:11.268115 | orchestrator | 00:01:11.268 STDOUT terraform:  + name = (known after apply) 2025-06-02 00:01:11.268151 | orchestrator | 00:01:11.268 STDOUT terraform:  + port = (known after apply) 2025-06-02 00:01:11.268186 | orchestrator | 00:01:11.268 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 00:01:11.268206 | orchestrator | 00:01:11.268 STDOUT terraform:  } 2025-06-02 00:01:11.268222 | orchestrator | 00:01:11.268 STDOUT terraform:  } 2025-06-02 00:01:11.268271 | orchestrator | 00:01:11.268 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-02 00:01:11.268317 | orchestrator | 00:01:11.268 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 00:01:11.268362 | orchestrator | 00:01:11.268 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 00:01:11.268400 | orchestrator | 00:01:11.268 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 00:01:11.268440 | orchestrator | 00:01:11.268 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 00:01:11.268480 | orchestrator | 00:01:11.268 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 00:01:11.268507 | orchestrator | 00:01:11.268 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 00:01:11.268531 | orchestrator | 00:01:11.268 STDOUT terraform:  + config_drive = true 2025-06-02 00:01:11.268574 | orchestrator | 00:01:11.268 STDOUT terraform:  + created = (known after apply) 2025-06-02 00:01:11.268636 | orchestrator | 00:01:11.268 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 00:01:11.268674 | orchestrator | 00:01:11.268 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 00:01:11.268702 | orchestrator | 00:01:11.268 STDOUT terraform:  + force_delete = false 2025-06-02 00:01:11.268742 | orchestrator | 00:01:11.268 STDOUT terraform:  + 
2025-06-02 00:01:11.268798 | orchestrator | 00:01:11.268 STDOUT terraform:
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-06-02 00:01:11.285216 | orchestrator | 00:01:11.285 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 00:01:11.285222 | orchestrator | 00:01:11.285 STDOUT terraform:  } 2025-06-02 00:01:11.285248 | orchestrator | 00:01:11.285 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.285276 | orchestrator | 00:01:11.285 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 00:01:11.285282 | orchestrator | 00:01:11.285 STDOUT terraform:  } 2025-06-02 00:01:11.285307 | orchestrator | 00:01:11.285 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.285337 | orchestrator | 00:01:11.285 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 00:01:11.285344 | orchestrator | 00:01:11.285 STDOUT terraform:  } 2025-06-02 00:01:11.285374 | orchestrator | 00:01:11.285 STDOUT terraform:  + binding (known after apply) 2025-06-02 00:01:11.285380 | orchestrator | 00:01:11.285 STDOUT terraform:  + fixed_ip { 2025-06-02 00:01:11.285413 | orchestrator | 00:01:11.285 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-02 00:01:11.285442 | orchestrator | 00:01:11.285 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 00:01:11.285449 | orchestrator | 00:01:11.285 STDOUT terraform:  } 2025-06-02 00:01:11.285469 | orchestrator | 00:01:11.285 STDOUT terraform:  } 2025-06-02 00:01:11.285513 | orchestrator | 00:01:11.285 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-02 00:01:11.285562 | orchestrator | 00:01:11.285 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 00:01:11.285599 | orchestrator | 00:01:11.285 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 00:01:11.285635 | orchestrator | 00:01:11.285 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 00:01:11.285672 | orchestrator | 00:01:11.285 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-02 00:01:11.285711 | orchestrator | 00:01:11.285 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 00:01:11.285760 | orchestrator | 00:01:11.285 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 00:01:11.285810 | orchestrator | 00:01:11.285 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 00:01:11.285853 | orchestrator | 00:01:11.285 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 00:01:11.285892 | orchestrator | 00:01:11.285 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 00:01:11.285934 | orchestrator | 00:01:11.285 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.285969 | orchestrator | 00:01:11.285 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 00:01:11.286008 | orchestrator | 00:01:11.285 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 00:01:11.286063 | orchestrator | 00:01:11.285 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 00:01:11.286101 | orchestrator | 00:01:11.286 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 00:01:11.286146 | orchestrator | 00:01:11.286 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.286196 | orchestrator | 00:01:11.286 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 00:01:11.286248 | orchestrator | 00:01:11.286 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.286274 | orchestrator | 00:01:11.286 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.286307 | orchestrator | 00:01:11.286 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 00:01:11.286313 | orchestrator | 00:01:11.286 STDOUT terraform:  } 2025-06-02 00:01:11.286351 | orchestrator | 00:01:11.286 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.286397 | orchestrator | 00:01:11.286 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 00:01:11.286421 | 
orchestrator | 00:01:11.286 STDOUT terraform:  } 2025-06-02 00:01:11.286448 | orchestrator | 00:01:11.286 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.286479 | orchestrator | 00:01:11.286 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 00:01:11.286485 | orchestrator | 00:01:11.286 STDOUT terraform:  } 2025-06-02 00:01:11.286513 | orchestrator | 00:01:11.286 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.286543 | orchestrator | 00:01:11.286 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 00:01:11.286553 | orchestrator | 00:01:11.286 STDOUT terraform:  } 2025-06-02 00:01:11.286573 | orchestrator | 00:01:11.286 STDOUT terraform:  + binding (known after apply) 2025-06-02 00:01:11.286579 | orchestrator | 00:01:11.286 STDOUT terraform:  + fixed_ip { 2025-06-02 00:01:11.286626 | orchestrator | 00:01:11.286 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-02 00:01:11.286665 | orchestrator | 00:01:11.286 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 00:01:11.286672 | orchestrator | 00:01:11.286 STDOUT terraform:  } 2025-06-02 00:01:11.286677 | orchestrator | 00:01:11.286 STDOUT terraform:  } 2025-06-02 00:01:11.286734 | orchestrator | 00:01:11.286 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-02 00:01:11.286796 | orchestrator | 00:01:11.286 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 00:01:11.286834 | orchestrator | 00:01:11.286 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 00:01:11.286873 | orchestrator | 00:01:11.286 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 00:01:11.286908 | orchestrator | 00:01:11.286 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 00:01:11.286946 | orchestrator | 00:01:11.286 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 00:01:11.286988 | orchestrator | 
00:01:11.286 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 00:01:11.287042 | orchestrator | 00:01:11.286 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 00:01:11.287097 | orchestrator | 00:01:11.287 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 00:01:11.287137 | orchestrator | 00:01:11.287 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 00:01:11.287179 | orchestrator | 00:01:11.287 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.287217 | orchestrator | 00:01:11.287 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 00:01:11.287255 | orchestrator | 00:01:11.287 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 00:01:11.287292 | orchestrator | 00:01:11.287 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 00:01:11.287329 | orchestrator | 00:01:11.287 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 00:01:11.287367 | orchestrator | 00:01:11.287 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.287442 | orchestrator | 00:01:11.287 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 00:01:11.287479 | orchestrator | 00:01:11.287 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.287504 | orchestrator | 00:01:11.287 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.287534 | orchestrator | 00:01:11.287 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 00:01:11.287541 | orchestrator | 00:01:11.287 STDOUT terraform:  } 2025-06-02 00:01:11.287566 | orchestrator | 00:01:11.287 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.287592 | orchestrator | 00:01:11.287 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 00:01:11.287604 | orchestrator | 00:01:11.287 STDOUT terraform:  } 2025-06-02 00:01:11.287623 | orchestrator | 00:01:11.287 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 
00:01:11.287654 | orchestrator | 00:01:11.287 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 00:01:11.287661 | orchestrator | 00:01:11.287 STDOUT terraform:  } 2025-06-02 00:01:11.287688 | orchestrator | 00:01:11.287 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.287738 | orchestrator | 00:01:11.287 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 00:01:11.287784 | orchestrator | 00:01:11.287 STDOUT terraform:  } 2025-06-02 00:01:11.287816 | orchestrator | 00:01:11.287 STDOUT terraform:  + binding (known after apply) 2025-06-02 00:01:11.287822 | orchestrator | 00:01:11.287 STDOUT terraform:  + fixed_ip { 2025-06-02 00:01:11.287865 | orchestrator | 00:01:11.287 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-02 00:01:11.287902 | orchestrator | 00:01:11.287 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 00:01:11.287909 | orchestrator | 00:01:11.287 STDOUT terraform:  } 2025-06-02 00:01:11.287923 | orchestrator | 00:01:11.287 STDOUT terraform:  } 2025-06-02 00:01:11.287976 | orchestrator | 00:01:11.287 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-02 00:01:11.288019 | orchestrator | 00:01:11.287 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 00:01:11.288058 | orchestrator | 00:01:11.288 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 00:01:11.288097 | orchestrator | 00:01:11.288 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 00:01:11.288135 | orchestrator | 00:01:11.288 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 00:01:11.288176 | orchestrator | 00:01:11.288 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 00:01:11.288217 | orchestrator | 00:01:11.288 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 00:01:11.288256 | orchestrator | 00:01:11.288 STDOUT terraform:  + device_owner = (known after 
apply) 2025-06-02 00:01:11.288292 | orchestrator | 00:01:11.288 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 00:01:11.288333 | orchestrator | 00:01:11.288 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 00:01:11.288372 | orchestrator | 00:01:11.288 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.288414 | orchestrator | 00:01:11.288 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 00:01:11.288450 | orchestrator | 00:01:11.288 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 00:01:11.288485 | orchestrator | 00:01:11.288 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 00:01:11.288523 | orchestrator | 00:01:11.288 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 00:01:11.288562 | orchestrator | 00:01:11.288 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.288600 | orchestrator | 00:01:11.288 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 00:01:11.288634 | orchestrator | 00:01:11.288 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.288654 | orchestrator | 00:01:11.288 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.288682 | orchestrator | 00:01:11.288 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 00:01:11.288688 | orchestrator | 00:01:11.288 STDOUT terraform:  } 2025-06-02 00:01:11.288713 | orchestrator | 00:01:11.288 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.288744 | orchestrator | 00:01:11.288 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 00:01:11.288762 | orchestrator | 00:01:11.288 STDOUT terraform:  } 2025-06-02 00:01:11.288789 | orchestrator | 00:01:11.288 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.288819 | orchestrator | 00:01:11.288 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 00:01:11.288825 | orchestrator | 00:01:11.288 STDOUT terraform:  } 
2025-06-02 00:01:11.288850 | orchestrator | 00:01:11.288 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.288880 | orchestrator | 00:01:11.288 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 00:01:11.288887 | orchestrator | 00:01:11.288 STDOUT terraform:  } 2025-06-02 00:01:11.288915 | orchestrator | 00:01:11.288 STDOUT terraform:  + binding (known after apply) 2025-06-02 00:01:11.288921 | orchestrator | 00:01:11.288 STDOUT terraform:  + fixed_ip { 2025-06-02 00:01:11.288952 | orchestrator | 00:01:11.288 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-02 00:01:11.288982 | orchestrator | 00:01:11.288 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 00:01:11.288988 | orchestrator | 00:01:11.288 STDOUT terraform:  } 2025-06-02 00:01:11.289006 | orchestrator | 00:01:11.288 STDOUT terraform:  } 2025-06-02 00:01:11.289056 | orchestrator | 00:01:11.288 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-02 00:01:11.289104 | orchestrator | 00:01:11.289 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 00:01:11.289143 | orchestrator | 00:01:11.289 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 00:01:11.289181 | orchestrator | 00:01:11.289 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 00:01:11.289217 | orchestrator | 00:01:11.289 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 00:01:11.289256 | orchestrator | 00:01:11.289 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 00:01:11.289293 | orchestrator | 00:01:11.289 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 00:01:11.289330 | orchestrator | 00:01:11.289 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 00:01:11.289375 | orchestrator | 00:01:11.289 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 00:01:11.289428 | orchestrator | 
00:01:11.289 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 00:01:11.289488 | orchestrator | 00:01:11.289 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.289541 | orchestrator | 00:01:11.289 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 00:01:11.289581 | orchestrator | 00:01:11.289 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 00:01:11.289618 | orchestrator | 00:01:11.289 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 00:01:11.289657 | orchestrator | 00:01:11.289 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 00:01:11.289696 | orchestrator | 00:01:11.289 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.289734 | orchestrator | 00:01:11.289 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 00:01:11.289786 | orchestrator | 00:01:11.289 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.289812 | orchestrator | 00:01:11.289 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.289839 | orchestrator | 00:01:11.289 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 00:01:11.289845 | orchestrator | 00:01:11.289 STDOUT terraform:  } 2025-06-02 00:01:11.289873 | orchestrator | 00:01:11.289 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.289904 | orchestrator | 00:01:11.289 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 00:01:11.289910 | orchestrator | 00:01:11.289 STDOUT terraform:  } 2025-06-02 00:01:11.289938 | orchestrator | 00:01:11.289 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.289971 | orchestrator | 00:01:11.289 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 00:01:11.289978 | orchestrator | 00:01:11.289 STDOUT terraform:  } 2025-06-02 00:01:11.290005 | orchestrator | 00:01:11.289 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 00:01:11.290057 | orchestrator | 00:01:11.289 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 00:01:11.290064 | orchestrator | 00:01:11.290 STDOUT terraform:  } 2025-06-02 00:01:11.290092 | orchestrator | 00:01:11.290 STDOUT terraform:  + binding (known after apply) 2025-06-02 00:01:11.290099 | orchestrator | 00:01:11.290 STDOUT terraform:  + fixed_ip { 2025-06-02 00:01:11.290132 | orchestrator | 00:01:11.290 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-02 00:01:11.290163 | orchestrator | 00:01:11.290 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 00:01:11.290169 | orchestrator | 00:01:11.290 STDOUT terraform:  } 2025-06-02 00:01:11.290174 | orchestrator | 00:01:11.290 STDOUT terraform:  } 2025-06-02 00:01:11.290231 | orchestrator | 00:01:11.290 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-02 00:01:11.290280 | orchestrator | 00:01:11.290 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-02 00:01:11.290299 | orchestrator | 00:01:11.290 STDOUT terraform:  + force_destroy = false 2025-06-02 00:01:11.290332 | orchestrator | 00:01:11.290 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.290364 | orchestrator | 00:01:11.290 STDOUT terraform:  + port_id = (known after apply) 2025-06-02 00:01:11.290394 | orchestrator | 00:01:11.290 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.290423 | orchestrator | 00:01:11.290 STDOUT terraform:  + router_id = (known after apply) 2025-06-02 00:01:11.290453 | orchestrator | 00:01:11.290 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 00:01:11.290459 | orchestrator | 00:01:11.290 STDOUT terraform:  } 2025-06-02 00:01:11.290501 | orchestrator | 00:01:11.290 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-02 00:01:11.290540 | orchestrator | 00:01:11.290 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-02 00:01:11.290578 
| orchestrator | 00:01:11.290 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 00:01:11.290615 | orchestrator | 00:01:11.290 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 00:01:11.290640 | orchestrator | 00:01:11.290 STDOUT terraform:  + availability_zone_hints = [ 2025-06-02 00:01:11.290646 | orchestrator | 00:01:11.290 STDOUT terraform:  + "nova", 2025-06-02 00:01:11.290670 | orchestrator | 00:01:11.290 STDOUT terraform:  ] 2025-06-02 00:01:11.290707 | orchestrator | 00:01:11.290 STDOUT terraform:  + distributed = (known after apply) 2025-06-02 00:01:11.290743 | orchestrator | 00:01:11.290 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-02 00:01:11.290809 | orchestrator | 00:01:11.290 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-02 00:01:11.290848 | orchestrator | 00:01:11.290 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.290879 | orchestrator | 00:01:11.290 STDOUT terraform:  + name = "testbed" 2025-06-02 00:01:11.290917 | orchestrator | 00:01:11.290 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.290955 | orchestrator | 00:01:11.290 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.290985 | orchestrator | 00:01:11.290 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-02 00:01:11.290991 | orchestrator | 00:01:11.290 STDOUT terraform:  } 2025-06-02 00:01:11.291050 | orchestrator | 00:01:11.290 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-02 00:01:11.291104 | orchestrator | 00:01:11.291 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-02 00:01:11.291123 | orchestrator | 00:01:11.291 STDOUT terraform:  + description = "ssh" 2025-06-02 00:01:11.291148 | orchestrator | 00:01:11.291 STDOUT terraform:  + direction = "ingress" 2025-06-02 00:01:11.291167 | 
orchestrator | 00:01:11.291 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 00:01:11.291201 | orchestrator | 00:01:11.291 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.291221 | orchestrator | 00:01:11.291 STDOUT terraform:  + port_range_max = 22 2025-06-02 00:01:11.291245 | orchestrator | 00:01:11.291 STDOUT terraform:  + port_range_min = 22 2025-06-02 00:01:11.291251 | orchestrator | 00:01:11.291 STDOUT terraform:  + protocol = "tcp" 2025-06-02 00:01:11.291290 | orchestrator | 00:01:11.291 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.291324 | orchestrator | 00:01:11.291 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 00:01:11.291344 | orchestrator | 00:01:11.291 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 00:01:11.291374 | orchestrator | 00:01:11.291 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 00:01:11.291409 | orchestrator | 00:01:11.291 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.291416 | orchestrator | 00:01:11.291 STDOUT terraform:  } 2025-06-02 00:01:11.291520 | orchestrator | 00:01:11.291 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-02 00:01:11.291584 | orchestrator | 00:01:11.291 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-02 00:01:11.291619 | orchestrator | 00:01:11.291 STDOUT terraform:  + description = "wireguard" 2025-06-02 00:01:11.291657 | orchestrator | 00:01:11.291 STDOUT terraform:  + direction = "ingress" 2025-06-02 00:01:11.291695 | orchestrator | 00:01:11.291 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 00:01:11.291742 | orchestrator | 00:01:11.291 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.291811 | orchestrator | 00:01:11.291 STDOUT terraform:  + port_range_max = 51820 2025-06-02 00:01:11.291849 | orchestrator | 00:01:11.291 STDOUT 
terraform:  + port_range_min = 51820 2025-06-02 00:01:11.291877 | orchestrator | 00:01:11.291 STDOUT terraform:  + protocol = "udp" 2025-06-02 00:01:11.291913 | orchestrator | 00:01:11.291 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.291953 | orchestrator | 00:01:11.291 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 00:01:11.291980 | orchestrator | 00:01:11.291 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 00:01:11.292012 | orchestrator | 00:01:11.291 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 00:01:11.292045 | orchestrator | 00:01:11.292 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.292056 | orchestrator | 00:01:11.292 STDOUT terraform:  } 2025-06-02 00:01:11.292111 | orchestrator | 00:01:11.292 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-02 00:01:11.292165 | orchestrator | 00:01:11.292 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-02 00:01:11.292192 | orchestrator | 00:01:11.292 STDOUT terraform:  + direction = "ingress" 2025-06-02 00:01:11.292218 | orchestrator | 00:01:11.292 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 00:01:11.292257 | orchestrator | 00:01:11.292 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.292265 | orchestrator | 00:01:11.292 STDOUT terraform:  + protocol = "tcp" 2025-06-02 00:01:11.292301 | orchestrator | 00:01:11.292 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.292336 | orchestrator | 00:01:11.292 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 00:01:11.292367 | orchestrator | 00:01:11.292 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 00:01:11.292398 | orchestrator | 00:01:11.292 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 00:01:11.292430 | orchestrator | 
00:01:11.292 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.292439 | orchestrator | 00:01:11.292 STDOUT terraform:  } 2025-06-02 00:01:11.292493 | orchestrator | 00:01:11.292 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-02 00:01:11.292550 | orchestrator | 00:01:11.292 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-02 00:01:11.292575 | orchestrator | 00:01:11.292 STDOUT terraform:  + direction = "ingress" 2025-06-02 00:01:11.292595 | orchestrator | 00:01:11.292 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 00:01:11.292625 | orchestrator | 00:01:11.292 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.292644 | orchestrator | 00:01:11.292 STDOUT terraform:  + protocol = "udp" 2025-06-02 00:01:11.292673 | orchestrator | 00:01:11.292 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.292709 | orchestrator | 00:01:11.292 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 00:01:11.292740 | orchestrator | 00:01:11.292 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 00:01:11.292786 | orchestrator | 00:01:11.292 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 00:01:11.292818 | orchestrator | 00:01:11.292 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.292824 | orchestrator | 00:01:11.292 STDOUT terraform:  } 2025-06-02 00:01:11.292881 | orchestrator | 00:01:11.292 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-02 00:01:11.292936 | orchestrator | 00:01:11.292 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-02 00:01:11.292962 | orchestrator | 00:01:11.292 STDOUT terraform:  + direction = "ingress" 2025-06-02 00:01:11.292981 | orchestrator | 00:01:11.292 
STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 00:01:11.293011 | orchestrator | 00:01:11.292 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.293030 | orchestrator | 00:01:11.293 STDOUT terraform:  + protocol = "icmp" 2025-06-02 00:01:11.293061 | orchestrator | 00:01:11.293 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.293093 | orchestrator | 00:01:11.293 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 00:01:11.293124 | orchestrator | 00:01:11.293 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 00:01:11.293154 | orchestrator | 00:01:11.293 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 00:01:11.293186 | orchestrator | 00:01:11.293 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.293192 | orchestrator | 00:01:11.293 STDOUT terraform:  } 2025-06-02 00:01:11.293250 | orchestrator | 00:01:11.293 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-02 00:01:11.293300 | orchestrator | 00:01:11.293 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-02 00:01:11.293325 | orchestrator | 00:01:11.293 STDOUT terraform:  + direction = "ingress" 2025-06-02 00:01:11.293344 | orchestrator | 00:01:11.293 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 00:01:11.293379 | orchestrator | 00:01:11.293 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.293401 | orchestrator | 00:01:11.293 STDOUT terraform:  + protocol = "tcp" 2025-06-02 00:01:11.293429 | orchestrator | 00:01:11.293 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.293461 | orchestrator | 00:01:11.293 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 00:01:11.293486 | orchestrator | 00:01:11.293 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 00:01:11.293517 | orchestrator | 00:01:11.293 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-06-02 00:01:11.293547 | orchestrator | 00:01:11.293 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.293554 | orchestrator | 00:01:11.293 STDOUT terraform:  } 2025-06-02 00:01:11.293609 | orchestrator | 00:01:11.293 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-02 00:01:11.293662 | orchestrator | 00:01:11.293 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-02 00:01:11.293688 | orchestrator | 00:01:11.293 STDOUT terraform:  + direction = "ingress" 2025-06-02 00:01:11.293707 | orchestrator | 00:01:11.293 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 00:01:11.293743 | orchestrator | 00:01:11.293 STDOUT terraform:  + id = (known after apply) 2025-06-02 00:01:11.293773 | orchestrator | 00:01:11.293 STDOUT terraform:  + protocol = "udp" 2025-06-02 00:01:11.293803 | orchestrator | 00:01:11.293 STDOUT terraform:  + region = (known after apply) 2025-06-02 00:01:11.293835 | orchestrator | 00:01:11.293 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 00:01:11.293865 | orchestrator | 00:01:11.293 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 00:01:11.293896 | orchestrator | 00:01:11.293 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 00:01:11.293927 | orchestrator | 00:01:11.293 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 00:01:11.293933 | orchestrator | 00:01:11.293 STDOUT terraform:  } 2025-06-02 00:01:11.293990 | orchestrator | 00:01:11.293 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-02 00:01:11.294060 | orchestrator | 00:01:11.293 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-02 00:01:11.294080 | orchestrator | 00:01:11.294 STDOUT terraform:  + direction = "ingress" 
2025-06-02 00:01:11.294099 | orchestrator | 00:01:11.294 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 00:01:11.294131 | orchestrator | 00:01:11.294 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.294150 | orchestrator | 00:01:11.294 STDOUT terraform:  + protocol = "icmp"
2025-06-02 00:01:11.294182 | orchestrator | 00:01:11.294 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.294215 | orchestrator | 00:01:11.294 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 00:01:11.294238 | orchestrator | 00:01:11.294 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 00:01:11.294266 | orchestrator | 00:01:11.294 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 00:01:11.294301 | orchestrator | 00:01:11.294 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 00:01:11.294306 | orchestrator | 00:01:11.294 STDOUT terraform:  }
2025-06-02 00:01:11.294363 | orchestrator | 00:01:11.294 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-06-02 00:01:11.294428 | orchestrator | 00:01:11.294 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-06-02 00:01:11.294464 | orchestrator | 00:01:11.294 STDOUT terraform:  + description = "vrrp"
2025-06-02 00:01:11.294505 | orchestrator | 00:01:11.294 STDOUT terraform:  + direction = "ingress"
2025-06-02 00:01:11.294534 | orchestrator | 00:01:11.294 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 00:01:11.294569 | orchestrator | 00:01:11.294 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.294596 | orchestrator | 00:01:11.294 STDOUT terraform:  + protocol = "112"
2025-06-02 00:01:11.294629 | orchestrator | 00:01:11.294 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.294657 | orchestrator | 00:01:11.294 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 00:01:11.294682 | orchestrator | 00:01:11.294 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 00:01:11.294713 | orchestrator | 00:01:11.294 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 00:01:11.294744 | orchestrator | 00:01:11.294 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 00:01:11.294774 | orchestrator | 00:01:11.294 STDOUT terraform:  }
2025-06-02 00:01:11.294861 | orchestrator | 00:01:11.294 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-06-02 00:01:11.294912 | orchestrator | 00:01:11.294 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-06-02 00:01:11.294943 | orchestrator | 00:01:11.294 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 00:01:11.294979 | orchestrator | 00:01:11.294 STDOUT terraform:  + description = "management security group"
2025-06-02 00:01:11.295010 | orchestrator | 00:01:11.294 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.295040 | orchestrator | 00:01:11.295 STDOUT terraform:  + name = "testbed-management"
2025-06-02 00:01:11.295071 | orchestrator | 00:01:11.295 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.295108 | orchestrator | 00:01:11.295 STDOUT terraform:  + stateful = (known after apply)
2025-06-02 00:01:11.295143 | orchestrator | 00:01:11.295 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 00:01:11.295149 | orchestrator | 00:01:11.295 STDOUT terraform:  }
2025-06-02 00:01:11.295201 | orchestrator | 00:01:11.295 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-06-02 00:01:11.295253 | orchestrator | 00:01:11.295 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-06-02 00:01:11.295283 | orchestrator | 00:01:11.295 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 00:01:11.295313 | orchestrator | 00:01:11.295 STDOUT terraform:  + description = "node security group"
2025-06-02 00:01:11.295345 | orchestrator | 00:01:11.295 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.295371 | orchestrator | 00:01:11.295 STDOUT terraform:  + name = "testbed-node"
2025-06-02 00:01:11.295439 | orchestrator | 00:01:11.295 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.295447 | orchestrator | 00:01:11.295 STDOUT terraform:  + stateful = (known after apply)
2025-06-02 00:01:11.295453 | orchestrator | 00:01:11.295 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 00:01:11.295457 | orchestrator | 00:01:11.295 STDOUT terraform:  }
2025-06-02 00:01:11.295504 | orchestrator | 00:01:11.295 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-06-02 00:01:11.295550 | orchestrator | 00:01:11.295 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-06-02 00:01:11.295581 | orchestrator | 00:01:11.295 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 00:01:11.295613 | orchestrator | 00:01:11.295 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-06-02 00:01:11.295633 | orchestrator | 00:01:11.295 STDOUT terraform:  + dns_nameservers = [
2025-06-02 00:01:11.295639 | orchestrator | 00:01:11.295 STDOUT terraform:  + "8.8.8.8",
2025-06-02 00:01:11.295658 | orchestrator | 00:01:11.295 STDOUT terraform:  + "9.9.9.9",
2025-06-02 00:01:11.295664 | orchestrator | 00:01:11.295 STDOUT terraform:  ]
2025-06-02 00:01:11.295690 | orchestrator | 00:01:11.295 STDOUT terraform:  + enable_dhcp = true
2025-06-02 00:01:11.295721 | orchestrator | 00:01:11.295 STDOUT terraform:  + gateway_ip = (known after apply)
2025-06-02 00:01:11.295764 | orchestrator | 00:01:11.295 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.295791 | orchestrator | 00:01:11.295 STDOUT terraform:  + ip_version = 4
2025-06-02 00:01:11.295833 | orchestrator | 00:01:11.295 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-06-02 00:01:11.295869 | orchestrator | 00:01:11.295 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-06-02 00:01:11.295908 | orchestrator | 00:01:11.295 STDOUT terraform:  + name = "subnet-testbed-management"
2025-06-02 00:01:11.295939 | orchestrator | 00:01:11.295 STDOUT terraform:  + network_id = (known after apply)
2025-06-02 00:01:11.295959 | orchestrator | 00:01:11.295 STDOUT terraform:  + no_gateway = false
2025-06-02 00:01:11.295992 | orchestrator | 00:01:11.295 STDOUT terraform:  + region = (known after apply)
2025-06-02 00:01:11.296023 | orchestrator | 00:01:11.295 STDOUT terraform:  + service_types = (known after apply)
2025-06-02 00:01:11.296054 | orchestrator | 00:01:11.296 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 00:01:11.296074 | orchestrator | 00:01:11.296 STDOUT terraform:  + allocation_pool {
2025-06-02 00:01:11.296093 | orchestrator | 00:01:11.296 STDOUT terraform:  + end = "192.168.31.250"
2025-06-02 00:01:11.296119 | orchestrator | 00:01:11.296 STDOUT terraform:  + start = "192.168.31.200"
2025-06-02 00:01:11.296125 | orchestrator | 00:01:11.296 STDOUT terraform:  }
2025-06-02 00:01:11.296130 | orchestrator | 00:01:11.296 STDOUT terraform:  }
2025-06-02 00:01:11.296163 | orchestrator | 00:01:11.296 STDOUT terraform:  # terraform_data.image will be created
2025-06-02 00:01:11.296209 | orchestrator | 00:01:11.296 STDOUT terraform:  + resource "terraform_data" "image" {
2025-06-02 00:01:11.296216 | orchestrator | 00:01:11.296 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.296220 | orchestrator | 00:01:11.296 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-02 00:01:11.296249 | orchestrator | 00:01:11.296 STDOUT terraform:  + output = (known after apply)
2025-06-02 00:01:11.296255 | orchestrator | 00:01:11.296 STDOUT terraform:  }
2025-06-02 00:01:11.296287 | orchestrator | 00:01:11.296 STDOUT terraform:  # terraform_data.image_node will be created
2025-06-02 00:01:11.296317 | orchestrator | 00:01:11.296 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-06-02 00:01:11.296342 | orchestrator | 00:01:11.296 STDOUT terraform:  + id = (known after apply)
2025-06-02 00:01:11.296366 | orchestrator | 00:01:11.296 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-02 00:01:11.296396 | orchestrator | 00:01:11.296 STDOUT terraform:  + output = (known after apply)
2025-06-02 00:01:11.296402 | orchestrator | 00:01:11.296 STDOUT terraform:  }
2025-06-02 00:01:11.296455 | orchestrator | 00:01:11.296 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-06-02 00:01:11.296474 | orchestrator | 00:01:11.296 STDOUT terraform: Changes to Outputs:
2025-06-02 00:01:11.296514 | orchestrator | 00:01:11.296 STDOUT terraform:  + manager_address = (sensitive value)
2025-06-02 00:01:11.296549 | orchestrator | 00:01:11.296 STDOUT terraform:  + private_key = (sensitive value)
2025-06-02 00:01:11.507981 | orchestrator | 00:01:11.507 STDOUT terraform: terraform_data.image: Creating...
2025-06-02 00:01:11.508083 | orchestrator | 00:01:11.507 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=24b85347-7edd-37d5-fc60-87fe4333151b]
2025-06-02 00:01:11.508101 | orchestrator | 00:01:11.507 STDOUT terraform: terraform_data.image_node: Creating...
2025-06-02 00:01:11.508138 | orchestrator | 00:01:11.507 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=050e11d9-fa70-98f2-f5f3-e03b1fca3f63]
2025-06-02 00:01:11.521722 | orchestrator | 00:01:11.521 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-06-02 00:01:11.522228 | orchestrator | 00:01:11.522 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-06-02 00:01:11.524946 | orchestrator | 00:01:11.524 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-06-02 00:01:11.525017 | orchestrator | 00:01:11.524 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-06-02 00:01:11.525514 | orchestrator | 00:01:11.525 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-06-02 00:01:11.525653 | orchestrator | 00:01:11.525 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-06-02 00:01:11.526716 | orchestrator | 00:01:11.526 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-06-02 00:01:11.526847 | orchestrator | 00:01:11.526 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-06-02 00:01:11.530495 | orchestrator | 00:01:11.530 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-06-02 00:01:11.535421 | orchestrator | 00:01:11.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-06-02 00:01:11.990209 | orchestrator | 00:01:11.989 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-02 00:01:11.990311 | orchestrator | 00:01:11.989 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-02 00:01:11.998130 | orchestrator | 00:01:11.997 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-06-02 00:01:12.000004 | orchestrator | 00:01:11.999 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-06-02 00:01:12.035953 | orchestrator | 00:01:12.035 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-06-02 00:01:12.044003 | orchestrator | 00:01:12.043 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-06-02 00:01:17.450420 | orchestrator | 00:01:17.449 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 5s [id=8e01cca7-bf60-4931-b05e-923cf8941932]
2025-06-02 00:01:17.463000 | orchestrator | 00:01:17.462 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-06-02 00:01:21.527575 | orchestrator | 00:01:21.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-06-02 00:01:21.528071 | orchestrator | 00:01:21.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-06-02 00:01:21.530682 | orchestrator | 00:01:21.530 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-06-02 00:01:21.530810 | orchestrator | 00:01:21.530 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-06-02 00:01:21.535158 | orchestrator | 00:01:21.534 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-06-02 00:01:21.536145 | orchestrator | 00:01:21.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-06-02 00:01:21.999207 | orchestrator | 00:01:21.998 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-06-02 00:01:22.000306 | orchestrator | 00:01:22.000 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-06-02 00:01:22.044961 | orchestrator | 00:01:22.044 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-06-02 00:01:22.085455 | orchestrator | 00:01:22.085 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=e8887d11-63ae-4566-a11b-b67b45b1443e]
2025-06-02 00:01:22.095199 | orchestrator | 00:01:22.094 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-06-02 00:01:22.099626 | orchestrator | 00:01:22.099 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=e440bdc3-1867-4817-abfb-a8a36f681931]
2025-06-02 00:01:22.106445 | orchestrator | 00:01:22.106 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-06-02 00:01:22.114529 | orchestrator | 00:01:22.114 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=ba4d1aaf-78c8-4549-a686-67bb8e50d69d]
2025-06-02 00:01:22.127913 | orchestrator | 00:01:22.127 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-06-02 00:01:22.131732 | orchestrator | 00:01:22.131 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=fa9fa188-919c-438b-be4f-34a22a00bea2]
2025-06-02 00:01:22.135001 | orchestrator | 00:01:22.134 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=4ace89ec-5901-4181-9aea-4e5d559a0cfd]
2025-06-02 00:01:22.138504 | orchestrator | 00:01:22.138 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-06-02 00:01:22.139654 | orchestrator | 00:01:22.139 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-06-02 00:01:22.145215 | orchestrator | 00:01:22.144 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=9dfee06d-65ab-44da-8413-6b371a116172]
2025-06-02 00:01:22.153000 | orchestrator | 00:01:22.152 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-06-02 00:01:22.190897 | orchestrator | 00:01:22.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=ee21d93f-61cf-428c-a6c4-8efe670724e1]
2025-06-02 00:01:22.208312 | orchestrator | 00:01:22.208 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-06-02 00:01:22.215371 | orchestrator | 00:01:22.215 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=3718d106135d87ef1cb8cf3c66ed8cb53d3827e6]
2025-06-02 00:01:22.227410 | orchestrator | 00:01:22.227 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-06-02 00:01:22.232268 | orchestrator | 00:01:22.232 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=b95f816a687011b2e540f974dd46a114ec81250a]
2025-06-02 00:01:22.241122 | orchestrator | 00:01:22.240 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-06-02 00:01:22.241844 | orchestrator | 00:01:22.241 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=bbd34322-9953-4267-815d-84376d8605a5]
2025-06-02 00:01:22.245241 | orchestrator | 00:01:22.245 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=0e09092a-0107-49ca-ae5a-eacfcf6197eb]
2025-06-02 00:01:27.467100 | orchestrator | 00:01:27.466 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-06-02 00:01:27.764168 | orchestrator | 00:01:27.763 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=d1a2bbbe-9362-43b3-96f2-dcfed4bebf74]
2025-06-02 00:01:28.584158 | orchestrator | 00:01:28.583 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 7s [id=3129abe2-e739-49e4-bec7-ec325543ca9a]
2025-06-02 00:01:28.594003 | orchestrator | 00:01:28.593 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-06-02 00:01:32.096474 | orchestrator | 00:01:32.096 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-06-02 00:01:32.107603 | orchestrator | 00:01:32.107 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-06-02 00:01:32.129100 | orchestrator | 00:01:32.128 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-06-02 00:01:32.139941 | orchestrator | 00:01:32.139 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-06-02 00:01:32.140593 | orchestrator | 00:01:32.140 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-06-02 00:01:32.154234 | orchestrator | 00:01:32.153 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-06-02 00:01:32.424852 | orchestrator | 00:01:32.424 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=f5e0c2db-a039-40a9-94ad-8a36749fe93f]
2025-06-02 00:01:32.472681 | orchestrator | 00:01:32.472 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=24a30ba4-1f7c-48df-a98c-7d1e4021ab04]
2025-06-02 00:01:32.506966 | orchestrator | 00:01:32.506 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=9e8c5ff5-c57d-4c08-96dd-e9836efdc119]
2025-06-02 00:01:32.528183 | orchestrator | 00:01:32.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=9a277547-9c69-455c-ba34-e403b5f8d4c7]
2025-06-02 00:01:32.536151 | orchestrator | 00:01:32.535 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=f7d66c7f-a947-461e-903e-2cb9e3d050c5]
2025-06-02 00:01:32.538837 | orchestrator | 00:01:32.538 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=d9278158-5488-4924-af7c-e9a9bf543d8d]
2025-06-02 00:01:37.869385 | orchestrator | 00:01:37.868 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 9s [id=d97856f3-5871-4fe6-86e0-8c2a6228f696]
2025-06-02 00:01:37.875503 | orchestrator | 00:01:37.875 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-06-02 00:01:37.875863 | orchestrator | 00:01:37.875 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-06-02 00:01:37.878302 | orchestrator | 00:01:37.878 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-06-02 00:01:38.055007 | orchestrator | 00:01:38.054 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=0fa6a85c-f061-4162-9f20-95fc4b3c9f31]
2025-06-02 00:01:38.068436 | orchestrator | 00:01:38.068 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-06-02 00:01:38.068503 | orchestrator | 00:01:38.068 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-06-02 00:01:38.068923 | orchestrator | 00:01:38.068 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-06-02 00:01:38.069365 | orchestrator | 00:01:38.069 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-06-02 00:01:38.070093 | orchestrator | 00:01:38.069 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-06-02 00:01:38.070535 | orchestrator | 00:01:38.070 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=47f794a1-1728-475e-b5b6-39a8ee629f2d]
2025-06-02 00:01:38.078417 | orchestrator | 00:01:38.078 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-06-02 00:01:38.078923 | orchestrator | 00:01:38.078 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-06-02 00:01:38.080100 | orchestrator | 00:01:38.079 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-06-02 00:01:38.080708 | orchestrator | 00:01:38.080 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-06-02 00:01:38.266867 | orchestrator | 00:01:38.266 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=9351d05a-4c53-417d-8fd6-ed697662e649]
2025-06-02 00:01:38.275505 | orchestrator | 00:01:38.275 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-06-02 00:01:38.291009 | orchestrator | 00:01:38.290 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=1948edd8-e5e4-40f9-967c-f68b4ad9bd8e]
2025-06-02 00:01:38.308245 | orchestrator | 00:01:38.307 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-06-02 00:01:38.488243 | orchestrator | 00:01:38.487 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=1229a8a2-92f4-437c-8aed-10968eadd743]
2025-06-02 00:01:38.500221 | orchestrator | 00:01:38.499 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=6daf6a54-6f43-4218-896d-fec22be8ab3c]
2025-06-02 00:01:38.511283 | orchestrator | 00:01:38.510 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-06-02 00:01:38.511564 | orchestrator | 00:01:38.511 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-06-02 00:01:38.666385 | orchestrator | 00:01:38.665 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=454947f5-5883-4c10-b626-4cc088f79561]
2025-06-02 00:01:38.681613 | orchestrator | 00:01:38.681 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-06-02 00:01:38.687795 | orchestrator | 00:01:38.687 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=3660b036-e0b8-4b19-babe-2ca2cf580dfa]
2025-06-02 00:01:38.703885 | orchestrator | 00:01:38.703 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-06-02 00:01:38.838428 | orchestrator | 00:01:38.837 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=735d6c3d-24c2-49d2-8441-6a1bd6b286c2]
2025-06-02 00:01:38.856950 | orchestrator | 00:01:38.856 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-06-02 00:01:39.014976 | orchestrator | 00:01:39.014 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6246253c-b0fc-41a2-b1fc-ff4b97dc8cda]
2025-06-02 00:01:39.239243 | orchestrator | 00:01:39.238 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=c17d26e0-6078-4f4f-996e-67618dc79f78]
2025-06-02 00:01:43.707934 | orchestrator | 00:01:43.707 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=8cf5d7a8-1702-4d98-b331-f75cc7889911]
2025-06-02 00:01:44.082809 | orchestrator | 00:01:44.082 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=d0b04ae8-6fb6-4851-bb44-abfa89bb64b5]
2025-06-02 00:01:44.268960 | orchestrator | 00:01:44.268 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=18c9061f-8864-4822-ad2a-d6fe972e306b]
2025-06-02 00:01:44.271145 | orchestrator | 00:01:44.270 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=f832cf9b-a7ba-4816-bd2e-f0a44663791d]
2025-06-02 00:01:44.323695 | orchestrator | 00:01:44.323 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=fbcf522b-d160-4b05-943d-25d5083a9185]
2025-06-02 00:01:44.362618 | orchestrator | 00:01:44.362 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=7a1fb132-248a-42ff-bd54-b74f9848f96a]
2025-06-02 00:01:44.501081 | orchestrator | 00:01:44.499 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=811287d4-abe9-434d-9f64-1ee7a0bcda0d]
2025-06-02 00:01:45.424820 | orchestrator | 00:01:45.424 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=8afd6fe5-eb14-489e-9bbf-3d4f5da7a713]
2025-06-02 00:01:45.456820 | orchestrator | 00:01:45.456 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-06-02 00:01:45.459411 | orchestrator | 00:01:45.459 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-06-02 00:01:45.470704 | orchestrator | 00:01:45.470 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-06-02 00:01:45.471942 | orchestrator | 00:01:45.471 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-06-02 00:01:45.474298 | orchestrator | 00:01:45.474 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-06-02 00:01:45.486355 | orchestrator | 00:01:45.486 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-06-02 00:01:45.487621 | orchestrator | 00:01:45.487 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-06-02 00:01:51.923914 | orchestrator | 00:01:51.923 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=eec02a97-e10d-4bb8-bd44-8c3d20840001]
2025-06-02 00:01:51.934333 | orchestrator | 00:01:51.934 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-06-02 00:01:51.937915 | orchestrator | 00:01:51.937 STDOUT terraform: local_file.inventory: Creating...
2025-06-02 00:01:51.944078 | orchestrator | 00:01:51.943 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=7acc569e464a6fc267ebbb86adf37ff084cf9fd7]
2025-06-02 00:01:51.945368 | orchestrator | 00:01:51.945 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-06-02 00:01:51.953572 | orchestrator | 00:01:51.953 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=1f4fed90a6302371aa9369396e13d234c2852995]
2025-06-02 00:01:52.696274 | orchestrator | 00:01:52.695 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=eec02a97-e10d-4bb8-bd44-8c3d20840001]
2025-06-02 00:01:55.460355 | orchestrator | 00:01:55.459 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-06-02 00:01:55.474663 | orchestrator | 00:01:55.474 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-06-02 00:01:55.474794 | orchestrator | 00:01:55.474 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-06-02 00:01:55.475608 | orchestrator | 00:01:55.475 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-06-02 00:01:55.488377 | orchestrator | 00:01:55.487 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-06-02 00:01:55.488514 | orchestrator | 00:01:55.488 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-06-02 00:02:05.460951 | orchestrator | 00:02:05.460 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-06-02 00:02:05.475354 | orchestrator | 00:02:05.475 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-06-02 00:02:05.475435 | orchestrator | 00:02:05.475 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-06-02 00:02:05.476338 | orchestrator | 00:02:05.476 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-06-02 00:02:05.488816 | orchestrator | 00:02:05.488 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-06-02 00:02:05.489021 | orchestrator | 00:02:05.488 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-06-02 00:02:05.824191 | orchestrator | 00:02:05.823 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=f56c6697-b3c5-4f24-9ed5-3b569573528d]
2025-06-02 00:02:05.914090 | orchestrator | 00:02:05.913 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=e744c0c1-a9c0-4d66-ba52-4f46a3f7cdf5]
2025-06-02 00:02:05.914899 | orchestrator | 00:02:05.914 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=b327337c-7426-4435-8919-8e6f51a58ea8]
2025-06-02 00:02:05.973737 | orchestrator | 00:02:05.973 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=24ca6b12-771a-4fc8-939a-62fa159e9f49]
2025-06-02 00:02:15.475796 | orchestrator | 00:02:15.475 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-06-02 00:02:15.475978 | orchestrator | 00:02:15.475 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-06-02 00:02:16.025361 | orchestrator | 00:02:16.025 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=dc98e0c1-0987-4ee1-b36b-0a091f639283]
2025-06-02 00:02:16.202057 | orchestrator | 00:02:16.201 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=19076117-ffec-4d51-a929-6c12d2757053]
2025-06-02 00:02:16.218343 | orchestrator | 00:02:16.218 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-06-02 00:02:16.223498 | orchestrator | 00:02:16.223 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2697898041580997258]
2025-06-02 00:02:16.226059 | orchestrator | 00:02:16.225 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-06-02 00:02:16.234288 | orchestrator | 00:02:16.234 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-06-02 00:02:16.235114 | orchestrator | 00:02:16.234 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-06-02 00:02:16.250256 | orchestrator | 00:02:16.250 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-06-02 00:02:16.253526 | orchestrator | 00:02:16.253 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-06-02 00:02:16.256466 | orchestrator | 00:02:16.256 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-06-02 00:02:16.256582 | orchestrator | 00:02:16.256 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-06-02 00:02:16.256846 | orchestrator | 00:02:16.256 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-06-02 00:02:16.260231 | orchestrator | 00:02:16.260 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-02 00:02:16.263708 | orchestrator | 00:02:16.263 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-02 00:02:21.919629 | orchestrator | 00:02:21.919 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=e744c0c1-a9c0-4d66-ba52-4f46a3f7cdf5/fa9fa188-919c-438b-be4f-34a22a00bea2] 2025-06-02 00:02:21.945430 | orchestrator | 00:02:21.945 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=dc98e0c1-0987-4ee1-b36b-0a091f639283/0e09092a-0107-49ca-ae5a-eacfcf6197eb] 2025-06-02 00:02:21.957485 | orchestrator | 00:02:21.957 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=f56c6697-b3c5-4f24-9ed5-3b569573528d/bbd34322-9953-4267-815d-84376d8605a5] 2025-06-02 00:02:21.978049 | orchestrator | 00:02:21.977 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=e744c0c1-a9c0-4d66-ba52-4f46a3f7cdf5/ee21d93f-61cf-428c-a6c4-8efe670724e1] 2025-06-02 00:02:21.983057 | orchestrator | 00:02:21.982 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=dc98e0c1-0987-4ee1-b36b-0a091f639283/9dfee06d-65ab-44da-8413-6b371a116172] 2025-06-02 00:02:22.017462 | orchestrator | 00:02:22.017 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=e744c0c1-a9c0-4d66-ba52-4f46a3f7cdf5/e440bdc3-1867-4817-abfb-a8a36f681931] 2025-06-02 00:02:22.061194 | orchestrator | 00:02:22.060 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=f56c6697-b3c5-4f24-9ed5-3b569573528d/ba4d1aaf-78c8-4549-a686-67bb8e50d69d] 
2025-06-02 00:02:22.073573 | orchestrator | 00:02:22.073 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=dc98e0c1-0987-4ee1-b36b-0a091f639283/e8887d11-63ae-4566-a11b-b67b45b1443e]
2025-06-02 00:02:22.249742 | orchestrator | 00:02:22.249 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=f56c6697-b3c5-4f24-9ed5-3b569573528d/4ace89ec-5901-4181-9aea-4e5d559a0cfd]
2025-06-02 00:02:26.269643 | orchestrator | 00:02:26.269 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-06-02 00:02:36.269759 | orchestrator | 00:02:36.269 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-06-02 00:02:36.603985 | orchestrator | 00:02:36.603 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=3f62a439-7ef8-4f6d-b608-dc9ab9270a35]
2025-06-02 00:02:36.631006 | orchestrator | 00:02:36.630 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-06-02 00:02:36.631123 | orchestrator | 00:02:36.630 STDOUT terraform: Outputs:
2025-06-02 00:02:36.631144 | orchestrator | 00:02:36.630 STDOUT terraform: manager_address =
2025-06-02 00:02:36.631160 | orchestrator | 00:02:36.631 STDOUT terraform: private_key =
2025-06-02 00:02:36.721181 | orchestrator | ok: Runtime: 0:01:35.991298
2025-06-02 00:02:36.756134 |
2025-06-02 00:02:36.756336 | TASK [Fetch manager address]
2025-06-02 00:02:37.315809 | orchestrator | ok
2025-06-02 00:02:37.324328 |
2025-06-02 00:02:37.324455 | TASK [Set manager_host address]
2025-06-02 00:02:37.401229 | orchestrator | ok
2025-06-02 00:02:37.409286 |
2025-06-02 00:02:37.409443 | LOOP [Update ansible collections]
2025-06-02 00:02:38.354440 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-06-02 00:02:38.354770 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-06-02 00:02:38.354813 | orchestrator | Starting galaxy collection install process
2025-06-02 00:02:38.354886 | orchestrator | Process install dependency map
2025-06-02 00:02:38.354910 | orchestrator | Starting collection install process
2025-06-02 00:02:38.354932 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2025-06-02 00:02:38.354956 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2025-06-02 00:02:38.354981 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-06-02 00:02:38.355033 | orchestrator | ok: Item: commons Runtime: 0:00:00.579058
2025-06-02 00:02:39.241668 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-06-02 00:02:39.241896 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-06-02 00:02:39.241957 | orchestrator | Starting galaxy collection install process
2025-06-02 00:02:39.242002 | orchestrator | Process install dependency map
2025-06-02 00:02:39.242045 | orchestrator | Starting collection install process
2025-06-02 00:02:39.242084 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2025-06-02 00:02:39.242124 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2025-06-02 00:02:39.242161 | orchestrator | osism.services:999.0.0 was installed successfully
2025-06-02 00:02:39.242220 | orchestrator | ok: Item: services Runtime: 0:00:00.604366
2025-06-02 00:02:39.259937 |
2025-06-02 00:02:39.260190 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-06-02 00:02:51.111320 | orchestrator | ok
2025-06-02 00:02:51.125925 |
2025-06-02 00:02:51.126111 | TASK [Wait a little longer for the manager so that everything is ready]
2025-06-02 00:03:51.178895 | orchestrator | ok
2025-06-02 00:03:51.193170 |
2025-06-02 00:03:51.193323 | TASK [Fetch manager ssh hostkey]
2025-06-02 00:03:52.819711 | orchestrator | Output suppressed because no_log was given
2025-06-02 00:03:52.828007 |
2025-06-02 00:03:52.828150 | TASK [Get ssh keypair from terraform environment]
2025-06-02 00:03:53.372555 | orchestrator | ok: Runtime: 0:00:00.010013
2025-06-02 00:03:53.384672 |
2025-06-02 00:03:53.384843 | TASK [Point out that the following task takes some time and does not give any output]
2025-06-02 00:03:53.434198 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-06-02 00:03:53.443946 |
2025-06-02 00:03:53.444084 | TASK [Run manager part 0]
2025-06-02 00:03:54.434644 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-06-02 00:03:54.524063 | orchestrator |
2025-06-02 00:03:54.524126 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-06-02 00:03:54.524137 | orchestrator |
2025-06-02 00:03:54.524155 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-06-02 00:03:56.621157 | orchestrator | ok: [testbed-manager]
2025-06-02 00:03:56.621208 | orchestrator |
2025-06-02 00:03:56.621232 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-06-02 00:03:56.621243 | orchestrator |
2025-06-02 00:03:56.621254 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 00:03:58.715166 | orchestrator | ok: [testbed-manager]
2025-06-02 00:03:58.715233 | orchestrator |
2025-06-02 00:03:58.715245 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-06-02 00:03:59.396801 | orchestrator | ok: [testbed-manager]
2025-06-02 00:03:59.396916 | orchestrator |
2025-06-02 00:03:59.396930 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-06-02 00:03:59.444725 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:03:59.444793 | orchestrator |
2025-06-02 00:03:59.444804 | orchestrator | TASK [Update package cache] ****************************************************
2025-06-02 00:03:59.470626 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:03:59.470693 | orchestrator |
2025-06-02 00:03:59.470706 | orchestrator | TASK [Install required packages] ***********************************************
2025-06-02 00:03:59.497162 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:03:59.497192 | orchestrator |
2025-06-02 00:03:59.497198 | orchestrator | TASK [Remove some python packages] *********************************************
2025-06-02 00:03:59.525711 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:03:59.525792 | orchestrator |
2025-06-02 00:03:59.525807 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-06-02 00:03:59.552390 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:03:59.552432 | orchestrator |
2025-06-02 00:03:59.552439 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-06-02 00:03:59.578839 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:03:59.578876 | orchestrator |
2025-06-02 00:03:59.578885 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-06-02 00:03:59.606540 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:03:59.606575 | orchestrator |
2025-06-02 00:03:59.606581 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-06-02 00:04:00.406654 | orchestrator | changed: [testbed-manager]
2025-06-02 00:04:00.406720 | orchestrator |
2025-06-02 00:04:00.406732 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-06-02 00:07:04.021508 | orchestrator | changed: [testbed-manager]
2025-06-02 00:07:04.021583 | orchestrator |
2025-06-02 00:07:04.021600 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-02 00:08:38.410617 | orchestrator | changed: [testbed-manager]
2025-06-02 00:08:38.410737 | orchestrator |
2025-06-02 00:08:38.410757 | orchestrator | TASK [Install required packages] ***********************************************
2025-06-02 00:08:59.739594 | orchestrator | changed: [testbed-manager]
2025-06-02 00:08:59.739711 | orchestrator |
2025-06-02 00:08:59.739733 | orchestrator | TASK [Remove some python packages] *********************************************
2025-06-02 00:09:09.263521 | orchestrator | changed: [testbed-manager]
2025-06-02 00:09:09.263640 | orchestrator |
2025-06-02 00:09:09.263658 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-06-02 00:09:09.314258 | orchestrator | ok: [testbed-manager]
2025-06-02 00:09:09.314334 | orchestrator |
2025-06-02 00:09:09.314348 | orchestrator | TASK [Get current user] ********************************************************
2025-06-02 00:09:10.042653 | orchestrator | ok: [testbed-manager]
2025-06-02 00:09:10.042736 | orchestrator |
2025-06-02 00:09:10.042754 | orchestrator | TASK [Create venv directory] ***************************************************
2025-06-02 00:09:10.730260 | orchestrator | changed: [testbed-manager]
2025-06-02 00:09:10.730342 | orchestrator |
2025-06-02 00:09:10.730358 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-06-02 00:09:16.987427 | orchestrator | changed: [testbed-manager]
2025-06-02 00:09:16.987476 | orchestrator |
2025-06-02 00:09:16.987501 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-06-02 00:09:22.737077 | orchestrator | changed: [testbed-manager]
2025-06-02 00:09:22.737171 | orchestrator |
2025-06-02 00:09:22.737189 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-06-02 00:09:25.214641 | orchestrator | changed: [testbed-manager]
2025-06-02 00:09:25.214688 | orchestrator |
2025-06-02 00:09:25.214697 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-06-02 00:09:26.922912 | orchestrator | changed: [testbed-manager]
2025-06-02 00:09:26.922997 | orchestrator |
2025-06-02 00:09:26.923012 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-06-02 00:09:28.070930 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-06-02 00:09:28.071022 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-06-02 00:09:28.071038 | orchestrator |
2025-06-02 00:09:28.071052 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-06-02 00:09:28.112678 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-06-02 00:09:28.112766 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-06-02 00:09:28.112780 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-06-02 00:09:28.112793 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-06-02 00:09:31.663690 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-06-02 00:09:31.663785 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-06-02 00:09:31.663801 | orchestrator |
2025-06-02 00:09:31.663813 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-06-02 00:09:32.214320 | orchestrator | changed: [testbed-manager]
2025-06-02 00:09:32.214409 | orchestrator |
2025-06-02 00:09:32.214426 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-06-02 00:13:52.647991 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-06-02 00:13:52.648104 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-06-02 00:13:52.648127 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-06-02 00:13:52.648141 | orchestrator |
2025-06-02 00:13:52.648154 | orchestrator | TASK [Install local collections] ***********************************************
2025-06-02 00:13:54.960472 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-06-02 00:13:54.960521 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-06-02 00:13:54.960531 | orchestrator |
2025-06-02 00:13:54.960541 | orchestrator | PLAY [Create operator user] ****************************************************
2025-06-02 00:13:54.960550 | orchestrator |
2025-06-02 00:13:54.960558 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 00:13:56.351773 | orchestrator | ok: [testbed-manager]
2025-06-02 00:13:56.351807 | orchestrator |
2025-06-02 00:13:56.351814 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-02 00:13:56.400276 | orchestrator | ok: [testbed-manager]
2025-06-02 00:13:56.400317 | orchestrator |
2025-06-02 00:13:56.400353 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-02 00:13:56.460632 | orchestrator | ok: [testbed-manager]
2025-06-02 00:13:56.460674 | orchestrator |
2025-06-02 00:13:56.460683 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-02 00:13:57.225735 | orchestrator | changed: [testbed-manager]
2025-06-02 00:13:57.225826 | orchestrator |
2025-06-02 00:13:57.225844 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-02 00:13:57.935392 | orchestrator | changed: [testbed-manager]
2025-06-02 00:13:57.936128 | orchestrator |
2025-06-02 00:13:57.936154 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-02 00:13:59.341706 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-06-02 00:13:59.341793 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-06-02 00:13:59.341810 | orchestrator |
2025-06-02 00:13:59.341838 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-02 00:14:00.718011 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:00.718168 | orchestrator |
2025-06-02 00:14:00.718196 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-02 00:14:02.391661 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 00:14:02.391752 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-06-02 00:14:02.391767 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-06-02 00:14:02.391779 | orchestrator |
2025-06-02 00:14:02.391792 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-02 00:14:02.945873 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:02.946477 | orchestrator |
2025-06-02 00:14:02.946507 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-02 00:14:03.013323 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:03.013391 | orchestrator |
2025-06-02 00:14:03.013400 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-02 00:14:03.874813 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 00:14:03.874880 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:03.874896 | orchestrator |
2025-06-02 00:14:03.874909 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-02 00:14:03.915445 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:03.915529 | orchestrator |
2025-06-02 00:14:03.915555 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-02 00:14:03.949986 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:03.950081 | orchestrator |
2025-06-02 00:14:03.950096 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-02 00:14:03.986338 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:03.986457 | orchestrator |
2025-06-02 00:14:03.986481 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-02 00:14:04.040632 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:04.040692 | orchestrator |
2025-06-02 00:14:04.040705 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-02 00:14:04.769503 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:04.769568 | orchestrator |
2025-06-02 00:14:04.769584 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-06-02 00:14:04.769597 | orchestrator |
2025-06-02 00:14:04.769611 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 00:14:06.159526 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:06.159593 | orchestrator |
2025-06-02 00:14:06.159609 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-06-02 00:14:07.101594 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:07.101681 | orchestrator |
2025-06-02 00:14:07.101700 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:14:07.101715 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-06-02 00:14:07.101730 | orchestrator |
2025-06-02 00:14:07.356723 | orchestrator | ok: Runtime: 0:10:13.466755
2025-06-02 00:14:07.375666 |
2025-06-02 00:14:07.375872 | TASK [Point out that the log in on the manager is now possible]
2025-06-02 00:14:07.426202 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-06-02 00:14:07.436804 |
2025-06-02 00:14:07.436960 | TASK [Point out that the following task takes some time and does not give any output]
2025-06-02 00:14:07.487094 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-06-02 00:14:07.497356 |
2025-06-02 00:14:07.497502 | TASK [Run manager part 1 + 2]
2025-06-02 00:14:08.320913 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-06-02 00:14:08.373999 | orchestrator |
2025-06-02 00:14:08.374073 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-06-02 00:14:08.374081 | orchestrator |
2025-06-02 00:14:08.374094 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 00:14:11.235289 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:11.235342 | orchestrator |
2025-06-02 00:14:11.235364 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-06-02 00:14:11.270367 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:11.270468 | orchestrator |
2025-06-02 00:14:11.270479 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-06-02 00:14:11.308800 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:11.308848 | orchestrator |
2025-06-02 00:14:11.308858 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 00:14:11.348117 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:11.348165 | orchestrator |
2025-06-02 00:14:11.348176 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 00:14:11.417315 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:11.417369 | orchestrator |
2025-06-02 00:14:11.417437 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 00:14:11.475309 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:11.475363 | orchestrator |
2025-06-02 00:14:11.475397 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 00:14:11.517424 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-06-02 00:14:11.517472 | orchestrator |
2025-06-02 00:14:11.517479 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 00:14:12.187188 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:12.187249 | orchestrator |
2025-06-02 00:14:12.187261 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 00:14:12.234743 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:12.234799 | orchestrator |
2025-06-02 00:14:12.234809 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 00:14:13.573002 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:13.573061 | orchestrator |
2025-06-02 00:14:13.573073 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 00:14:14.158582 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:14.158639 | orchestrator |
2025-06-02 00:14:14.158648 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 00:14:15.324480 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:15.324533 | orchestrator |
2025-06-02 00:14:15.324543 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 00:14:27.239308 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:27.239367 | orchestrator |
2025-06-02 00:14:27.239375 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-06-02 00:14:27.905883 | orchestrator | ok: [testbed-manager]
2025-06-02 00:14:27.905944 | orchestrator |
2025-06-02 00:14:27.905956 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-06-02 00:14:27.960939 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:27.960989 | orchestrator |
2025-06-02 00:14:27.960998 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-06-02 00:14:28.905124 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:28.905192 | orchestrator |
2025-06-02 00:14:28.905207 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-06-02 00:14:29.838718 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:29.838796 | orchestrator |
2025-06-02 00:14:29.838813 | orchestrator | TASK [Create configuration directory] ******************************************
2025-06-02 00:14:30.392130 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:30.392169 | orchestrator |
2025-06-02 00:14:30.392178 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-06-02 00:14:30.432598 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-06-02 00:14:30.432717 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-06-02 00:14:30.432738 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-06-02 00:14:30.432750 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-06-02 00:14:33.264172 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:33.264364 | orchestrator |
2025-06-02 00:14:33.264386 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-06-02 00:14:42.417894 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-06-02 00:14:42.417993 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-06-02 00:14:42.418006 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-06-02 00:14:42.418090 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-06-02 00:14:42.418117 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-06-02 00:14:42.418130 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-06-02 00:14:42.418143 | orchestrator |
2025-06-02 00:14:42.418155 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-06-02 00:14:43.435830 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:43.435918 | orchestrator |
2025-06-02 00:14:43.435935 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-06-02 00:14:43.476207 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:43.476257 | orchestrator |
2025-06-02 00:14:43.476266 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-06-02 00:14:46.404074 | orchestrator | changed: [testbed-manager]
2025-06-02 00:14:46.404168 | orchestrator |
2025-06-02 00:14:46.404185 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-06-02 00:14:46.447296 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:14:46.447355 | orchestrator |
2025-06-02 00:14:46.447364 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-06-02 00:16:19.946999 | orchestrator | changed: [testbed-manager]
2025-06-02 00:16:19.947106 | orchestrator |
2025-06-02 00:16:19.947126 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 00:16:21.011506 | orchestrator | ok: [testbed-manager]
2025-06-02 00:16:21.011615 | orchestrator |
2025-06-02 00:16:21.011633 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:16:21.011647 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 00:16:21.011659 | orchestrator |
2025-06-02 00:16:21.149533 | orchestrator | ok: Runtime: 0:02:13.301320
2025-06-02 00:16:21.159003 |
2025-06-02 00:16:21.159123 | TASK [Reboot manager]
2025-06-02 00:16:22.694967 | orchestrator | ok: Runtime: 0:00:00.947405
2025-06-02 00:16:22.709331 |
2025-06-02 00:16:22.709488 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-06-02 00:16:37.128566 | orchestrator | ok
2025-06-02 00:16:37.139104 |
2025-06-02 00:16:37.139271 | TASK [Wait a little longer for the manager so that everything is ready]
2025-06-02 00:17:37.184275 | orchestrator | ok
2025-06-02 00:17:37.194075 |
2025-06-02 00:17:37.194204 | TASK [Deploy manager + bootstrap nodes]
2025-06-02 00:17:39.591255 | orchestrator |
2025-06-02 00:17:39.591430 | orchestrator | # DEPLOY MANAGER
2025-06-02 00:17:39.591454 | orchestrator |
2025-06-02 00:17:39.591468 | orchestrator | + set -e
2025-06-02 00:17:39.591481 | orchestrator | + echo
2025-06-02 00:17:39.591495 | orchestrator | + echo '# DEPLOY MANAGER'
2025-06-02 00:17:39.591513 | orchestrator | + echo
2025-06-02 00:17:39.591562 | orchestrator | + cat /opt/manager-vars.sh
2025-06-02 00:17:39.594612 | orchestrator | export NUMBER_OF_NODES=6
2025-06-02 00:17:39.594685 | orchestrator |
2025-06-02 00:17:39.594709 | orchestrator | export CEPH_VERSION=reef
2025-06-02 00:17:39.594730 | orchestrator | export CONFIGURATION_VERSION=main
2025-06-02 00:17:39.594750 | orchestrator | export MANAGER_VERSION=9.1.0
2025-06-02 00:17:39.594787 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-06-02 00:17:39.594807 | orchestrator |
2025-06-02 00:17:39.594836 | orchestrator | export ARA=false
2025-06-02 00:17:39.594857 | orchestrator | export DEPLOY_MODE=manager
2025-06-02 00:17:39.594884 | orchestrator | export TEMPEST=false
2025-06-02 00:17:39.594896 | orchestrator | export IS_ZUUL=true
2025-06-02 00:17:39.594908 | orchestrator |
2025-06-02 00:17:39.594965 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203
2025-06-02 00:17:39.594979 | orchestrator | export EXTERNAL_API=false
2025-06-02 00:17:39.594990 | orchestrator |
2025-06-02 00:17:39.595001 | orchestrator | export IMAGE_USER=ubuntu
2025-06-02 00:17:39.595015 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-06-02 00:17:39.595026 | orchestrator |
2025-06-02 00:17:39.595036 | orchestrator | export CEPH_STACK=ceph-ansible
2025-06-02 00:17:39.595056 | orchestrator |
2025-06-02 00:17:39.595067 | orchestrator | + echo
2025-06-02 00:17:39.595079 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 00:17:39.595881 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 00:17:39.595915 | orchestrator | ++ INTERACTIVE=false
2025-06-02 00:17:39.595969 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 00:17:39.595989 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 00:17:39.596248 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 00:17:39.596280 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 00:17:39.596302 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 00:17:39.596323 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 00:17:39.596343 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 00:17:39.596363 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 00:17:39.596383 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 00:17:39.596398 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 00:17:39.596409 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 00:17:39.596419 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 00:17:39.596442 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 00:17:39.596453 | orchestrator | ++ export ARA=false
2025-06-02 00:17:39.596464 | orchestrator | ++ ARA=false
2025-06-02 00:17:39.596475 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 00:17:39.596486 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 00:17:39.596496 | orchestrator | ++ export TEMPEST=false
2025-06-02 00:17:39.596507 | orchestrator | ++ TEMPEST=false
2025-06-02 00:17:39.596523 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 00:17:39.596534 | orchestrator | ++ IS_ZUUL=true
2025-06-02 00:17:39.596545 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203
2025-06-02 00:17:39.596556 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203
2025-06-02 00:17:39.596567 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 00:17:39.596578 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 00:17:39.596588 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 00:17:39.596599 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 00:17:39.596610 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 00:17:39.596620 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 00:17:39.596631 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 00:17:39.596642 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 00:17:39.596653 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-06-02 00:17:39.658333 | orchestrator | + docker version
2025-06-02 00:17:39.926556 | orchestrator | Client: Docker Engine - Community
2025-06-02 00:17:39.926663 | orchestrator | Version: 27.5.1
2025-06-02 00:17:39.926681 | orchestrator | API version: 1.47
2025-06-02 00:17:39.926693 | orchestrator | Go version: go1.22.11
2025-06-02 00:17:39.926704 | orchestrator | Git commit: 9f9e405
2025-06-02 00:17:39.926715 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 00:17:39.926726 | orchestrator | OS/Arch: linux/amd64
2025-06-02 00:17:39.926737 | orchestrator | Context: default
2025-06-02 00:17:39.926748 | orchestrator |
2025-06-02 00:17:39.926759 | orchestrator | Server: Docker Engine - Community
2025-06-02 00:17:39.926770 | orchestrator | Engine:
2025-06-02 00:17:39.926794 | orchestrator | Version: 27.5.1
2025-06-02 00:17:39.926805 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-06-02 00:17:39.926847 | orchestrator | Go version: go1.22.11
2025-06-02 00:17:39.926858 | orchestrator | Git commit: 4c9b3b0
2025-06-02 00:17:39.926869 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 00:17:39.926880 | orchestrator | OS/Arch: linux/amd64
2025-06-02 00:17:39.926890 | orchestrator | Experimental: false
2025-06-02 00:17:39.926901 | orchestrator | containerd:
2025-06-02 00:17:39.926912 | orchestrator | Version: 1.7.27
2025-06-02 00:17:39.926949 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-06-02 00:17:39.926961 | orchestrator | runc:
2025-06-02 00:17:39.926973 | orchestrator | Version: 1.2.5
2025-06-02 00:17:39.926983 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-06-02 00:17:39.926994 | orchestrator | docker-init:
2025-06-02 00:17:39.927005 | orchestrator | Version: 0.19.0
2025-06-02 00:17:39.927016 | orchestrator | GitCommit: de40ad0
2025-06-02 00:17:39.930419 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-06-02 00:17:39.940083 | orchestrator | + set -e
2025-06-02 00:17:39.940138 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 00:17:39.940223 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 00:17:39.940240 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 00:17:39.940251 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 00:17:39.940262 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 00:17:39.940281 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 00:17:39.940292 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 00:17:39.940303 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 00:17:39.940314 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 00:17:39.940325 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 00:17:39.940343 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 00:17:39.940355 | orchestrator | ++ export ARA=false
2025-06-02 00:17:39.940366 | orchestrator | ++ ARA=false
2025-06-02 00:17:39.940377 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 00:17:39.940387 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 00:17:39.940398 | orchestrator | ++ export TEMPEST=false
2025-06-02 00:17:39.940409 | orchestrator | ++ TEMPEST=false
2025-06-02 00:17:39.940419 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 00:17:39.940430 | orchestrator | ++ IS_ZUUL=true
2025-06-02 00:17:39.940441 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203
2025-06-02 00:17:39.940452 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203
2025-06-02 00:17:39.940462 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 00:17:39.940473 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 00:17:39.940484 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 00:17:39.940494 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 00:17:39.940505 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 00:17:39.940516 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 00:17:39.940526 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 00:17:39.940537 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 00:17:39.940548 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 00:17:39.940558 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 00:17:39.940569 | orchestrator | ++ INTERACTIVE=false
2025-06-02 00:17:39.940581 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 00:17:39.940607 | orchestrator | ++
OSISM_APPLY_RETRY=1 2025-06-02 00:17:39.940799 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-02 00:17:39.940818 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0 2025-06-02 00:17:39.948084 | orchestrator | + set -e 2025-06-02 00:17:39.948145 | orchestrator | + VERSION=9.1.0 2025-06-02 00:17:39.948168 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-06-02 00:17:39.954973 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-02 00:17:39.955020 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-06-02 00:17:39.959807 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-06-02 00:17:39.963973 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-06-02 00:17:39.972485 | orchestrator | /opt/configuration ~ 2025-06-02 00:17:39.972521 | orchestrator | + set -e 2025-06-02 00:17:39.972532 | orchestrator | + pushd /opt/configuration 2025-06-02 00:17:39.972543 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-02 00:17:39.975270 | orchestrator | + source /opt/venv/bin/activate 2025-06-02 00:17:39.976097 | orchestrator | ++ deactivate nondestructive 2025-06-02 00:17:39.976143 | orchestrator | ++ '[' -n '' ']' 2025-06-02 00:17:39.976165 | orchestrator | ++ '[' -n '' ']' 2025-06-02 00:17:39.976216 | orchestrator | ++ hash -r 2025-06-02 00:17:39.976254 | orchestrator | ++ '[' -n '' ']' 2025-06-02 00:17:39.976273 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-02 00:17:39.976312 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-02 00:17:39.976340 | orchestrator | ++ '[' '!' 
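The set-manager-version.sh trace above pins `manager_version` in configuration.yml with sed and then deletes the `ceph_version`/`openstack_version` keys so those stay under the control of the generics. A minimal sketch of the same three sed edits, run against a throwaway temp file (the initial file contents here are made-up illustration; the real target is /opt/configuration/environments/manager/configuration.yml):

```shell
# Hypothetical stand-in for configuration.yml; the real script edits
# /opt/configuration/environments/manager/configuration.yml in place.
VERSION=9.1.0
cfg=$(mktemp)
printf '%s\n' \
  'manager_version: latest' \
  'ceph_version: quincy' \
  'openstack_version: 2024.1' > "$cfg"

# Pin the manager version (same substitution as in the log).
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$cfg"

# Drop the version pins that should be re-rendered by the generics.
sed -i '/ceph_version:/d' "$cfg"
sed -i '/openstack_version:/d' "$cfg"

cat "$cfg"
```

After the edits only the pinned `manager_version: 9.1.0` line remains in the sketch file, mirroring why the subsequent gilt overlay step can set the other versions itself.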
nondestructive = nondestructive ']'
2025-06-02 00:17:39.976366 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 00:17:39.976385 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 00:17:39.976402 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 00:17:39.976421 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 00:17:39.976440 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 00:17:39.976475 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 00:17:39.976502 | orchestrator | ++ export PATH
2025-06-02 00:17:39.976536 | orchestrator | ++ '[' -n '' ']'
2025-06-02 00:17:39.976554 | orchestrator | ++ '[' -z '' ']'
2025-06-02 00:17:39.976572 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 00:17:39.976659 | orchestrator | ++ PS1='(venv) '
2025-06-02 00:17:39.976684 | orchestrator | ++ export PS1
2025-06-02 00:17:39.976726 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 00:17:39.976746 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 00:17:39.976771 | orchestrator | ++ hash -r
2025-06-02 00:17:39.976914 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-06-02 00:17:40.940218 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-06-02 00:17:40.941495 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-06-02 00:17:40.942255 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-06-02 00:17:40.943419 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-06-02 00:17:40.944547 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-06-02 00:17:40.954421 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-06-02 00:17:40.955585 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-06-02 00:17:40.956558 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-06-02 00:17:40.957981 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-06-02 00:17:40.985752 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-06-02 00:17:40.987066 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-06-02 00:17:40.988556 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-06-02 00:17:40.989905 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-06-02 00:17:40.993866 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-06-02 00:17:41.184859 | orchestrator | ++ which gilt
2025-06-02 00:17:41.188465 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-06-02 00:17:41.188516 | orchestrator | + /opt/venv/bin/gilt overlay
2025-06-02 00:17:41.432303 | orchestrator | osism.cfg-generics:
2025-06-02 00:17:41.601529 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-06-02 00:17:41.603337 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-06-02 00:17:41.603368 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-06-02 00:17:41.603383 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-06-02 00:17:42.311437 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-06-02 00:17:42.321497 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-06-02 00:17:42.622608 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-06-02 00:17:42.668611 | orchestrator | ~
2025-06-02 00:17:42.668698 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 00:17:42.668713 | orchestrator | + deactivate
2025-06-02 00:17:42.668726 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-02 00:17:42.668738 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 00:17:42.668749 | orchestrator | + export PATH
2025-06-02 00:17:42.668760 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-02 00:17:42.668770 | orchestrator | + '[' -n '' ']'
2025-06-02 00:17:42.668783 | orchestrator | + hash -r
2025-06-02 00:17:42.668794 | orchestrator | + '[' -n '' ']'
2025-06-02 00:17:42.668805 | orchestrator | + unset VIRTUAL_ENV
2025-06-02 00:17:42.668815 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-02 00:17:42.668826 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-02 00:17:42.668836 | orchestrator | + unset -f deactivate
2025-06-02 00:17:42.668847 | orchestrator | + popd
2025-06-02 00:17:42.669772 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-02 00:17:42.669797 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-06-02 00:17:42.671057 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-02 00:17:42.723467 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-02 00:17:42.723556 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-06-02 00:17:42.723570 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-06-02 00:17:42.762716 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 00:17:42.762787 | orchestrator | + source /opt/venv/bin/activate
2025-06-02 00:17:42.762816 | orchestrator | ++ deactivate nondestructive
2025-06-02 00:17:42.762829 | orchestrator | ++ '[' -n '' ']'
2025-06-02 00:17:42.762840 | orchestrator | ++ '[' -n '' ']'
2025-06-02 00:17:42.762851 | orchestrator | ++ hash -r
2025-06-02 00:17:42.762889 | orchestrator | ++ '[' -n '' ']'
2025-06-02 00:17:42.762901 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-02 00:17:42.762912 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-02 00:17:42.762923 | orchestrator | ++ '[' '!'
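The version gate in the trace above runs `semver 9.1.0 7.0.0` (the semver2.sh script symlinked to /usr/local/bin/semver earlier in the log), gets `1` back because 9.1.0 is greater than 7.0.0, and `[[ 1 -ge 0 ]]` then appends `enable_osism_kubernetes: true`. A rough stand-in for that comparison using `sort -V` (this is not the semver2.sh implementation, just an approximation that works for plain dotted versions without pre-release tags):

```shell
# Approximate semver comparison via GNU sort -V (an assumption, not the
# real semver2.sh). Prints -1, 0, or 1 for a < b, a = b, a > b.
semver_cmp() {
  a=$1
  b=$2
  if [ "$a" = "$b" ]; then
    printf '0\n'
  elif [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
    printf '%s\n' '-1'
  else
    printf '1\n'
  fi
}

# Mirrors the gate in the log: manager >= 7.0.0 enables the Kubernetes bits.
if [ "$(semver_cmp 9.1.0 7.0.0)" -ge 0 ]; then
  echo 'enable_osism_kubernetes: true'
fi
```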
nondestructive = nondestructive ']'
2025-06-02 00:17:42.763344 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 00:17:42.763361 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 00:17:42.763372 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 00:17:42.763474 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 00:17:42.763491 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 00:17:42.763596 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 00:17:42.763634 | orchestrator | ++ export PATH
2025-06-02 00:17:42.763646 | orchestrator | ++ '[' -n '' ']'
2025-06-02 00:17:42.763657 | orchestrator | ++ '[' -z '' ']'
2025-06-02 00:17:42.763667 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 00:17:42.763817 | orchestrator | ++ PS1='(venv) '
2025-06-02 00:17:42.763833 | orchestrator | ++ export PS1
2025-06-02 00:17:42.763845 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 00:17:42.763856 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 00:17:42.763870 | orchestrator | ++ hash -r
2025-06-02 00:17:42.763984 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-06-02 00:17:43.836881 | orchestrator |
2025-06-02 00:17:43.837050 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-06-02 00:17:43.837068 | orchestrator |
2025-06-02 00:17:43.837080 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 00:17:44.395739 | orchestrator | ok: [testbed-manager]
2025-06-02 00:17:44.395850 | orchestrator |
2025-06-02 00:17:44.395866 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 00:17:45.331093 | orchestrator | changed: [testbed-manager]
2025-06-02 00:17:45.331208 | orchestrator |
2025-06-02 00:17:45.331224 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-06-02 00:17:45.331237 | orchestrator |
2025-06-02 00:17:45.331248 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 00:17:47.592804 | orchestrator | ok: [testbed-manager]
2025-06-02 00:17:47.592978 | orchestrator |
2025-06-02 00:17:47.592999 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-06-02 00:17:47.645326 | orchestrator | ok: [testbed-manager]
2025-06-02 00:17:47.645398 | orchestrator |
2025-06-02 00:17:47.645412 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-06-02 00:17:48.099793 | orchestrator | changed: [testbed-manager]
2025-06-02 00:17:48.099900 | orchestrator |
2025-06-02 00:17:48.099919 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-06-02 00:17:48.135209 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:17:48.135278 | orchestrator |
2025-06-02 00:17:48.135293 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-02 00:17:48.461897 | orchestrator | changed: [testbed-manager]
2025-06-02 00:17:48.462094 | orchestrator |
2025-06-02 00:17:48.462112 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-06-02 00:17:48.504708 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:17:48.504780 | orchestrator |
2025-06-02 00:17:48.504793 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-06-02 00:17:48.805263 | orchestrator | ok: [testbed-manager]
2025-06-02 00:17:48.805337 | orchestrator |
2025-06-02 00:17:48.805345 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-06-02 00:17:48.910574 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:17:48.910668 | orchestrator |
2025-06-02 00:17:48.910682 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-06-02 00:17:48.910695 | orchestrator |
2025-06-02 00:17:48.910707 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 00:17:50.710772 | orchestrator | ok: [testbed-manager]
2025-06-02 00:17:50.710884 | orchestrator |
2025-06-02 00:17:50.710901 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-06-02 00:17:50.803256 | orchestrator | included: osism.services.traefik for testbed-manager
2025-06-02 00:17:50.803350 | orchestrator |
2025-06-02 00:17:50.803364 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-06-02 00:17:50.871324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-06-02 00:17:50.871400 | orchestrator |
2025-06-02 00:17:50.871414 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-06-02 00:17:51.911395 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-06-02 00:17:51.911499 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-06-02 00:17:51.911517 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-06-02 00:17:51.911529 | orchestrator |
2025-06-02 00:17:51.911542 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-06-02 00:17:53.664069 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-06-02 00:17:53.664182 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-06-02 00:17:53.664198 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-06-02 00:17:53.664211 | orchestrator |
2025-06-02 00:17:53.664224 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-06-02 00:17:54.297869 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 00:17:54.298068 | orchestrator | changed: [testbed-manager]
2025-06-02 00:17:54.298089 | orchestrator |
2025-06-02 00:17:54.298102 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-06-02 00:17:54.932631 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 00:17:54.932738 | orchestrator | changed: [testbed-manager]
2025-06-02 00:17:54.932754 | orchestrator |
2025-06-02 00:17:54.932767 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-06-02 00:17:54.989000 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:17:54.989075 | orchestrator |
2025-06-02 00:17:54.989088 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-06-02 00:17:55.352795 | orchestrator | ok: [testbed-manager]
2025-06-02 00:17:55.352892 | orchestrator |
2025-06-02 00:17:55.352908 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-06-02 00:17:55.419129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-06-02 00:17:55.419193 | orchestrator |
2025-06-02 00:17:55.419207 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-06-02 00:17:56.447236 | orchestrator | changed: [testbed-manager]
2025-06-02 00:17:56.447341 | orchestrator |
2025-06-02 00:17:56.447356 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-06-02
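The "Create traefik external network" task above reports `changed` on this first pass and `ok` when the manager role touches the same network again later; in plain CLI terms that idempotent pattern is inspect-or-create. A dry-run sketch (the helper only prints the command rather than executing it, since actually creating the network needs a running Docker daemon; the role itself uses Ansible's Docker modules, not this CLI form):

```shell
# Idempotent "ensure external network exists", dry-run style.
ensure_network() {
  name=$1
  # Real form: docker network inspect "$name" >/dev/null 2>&1 || docker network create "$name"
  printf 'docker network inspect %s || docker network create %s\n' "$name" "$name"
}

ensure_network traefik
```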
00:17:57.197859 | orchestrator | changed: [testbed-manager]
2025-06-02 00:17:57.198082 | orchestrator |
2025-06-02 00:17:57.198103 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-06-02 00:18:08.385039 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:08.385152 | orchestrator |
2025-06-02 00:18:08.385191 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-06-02 00:18:08.432102 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:18:08.432179 | orchestrator |
2025-06-02 00:18:08.432192 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-06-02 00:18:08.432204 | orchestrator |
2025-06-02 00:18:08.432215 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 00:18:10.299854 | orchestrator | ok: [testbed-manager]
2025-06-02 00:18:10.299953 | orchestrator |
2025-06-02 00:18:10.299969 | orchestrator | TASK [Apply manager role] ******************************************************
2025-06-02 00:18:10.394365 | orchestrator | included: osism.services.manager for testbed-manager
2025-06-02 00:18:10.394449 | orchestrator |
2025-06-02 00:18:10.394456 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-06-02 00:18:10.441788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 00:18:10.441868 | orchestrator |
2025-06-02 00:18:10.441878 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-06-02 00:18:12.782581 | orchestrator | ok: [testbed-manager]
2025-06-02 00:18:12.782688 | orchestrator |
2025-06-02 00:18:12.782705 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-06-02 00:18:12.817222 | orchestrator | ok: [testbed-manager]
2025-06-02 00:18:12.817260 | orchestrator |
2025-06-02 00:18:12.817272 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-06-02 00:18:12.935431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-06-02 00:18:12.935520 | orchestrator |
2025-06-02 00:18:12.935534 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-06-02 00:18:15.752378 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-06-02 00:18:15.752453 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-06-02 00:18:15.752459 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-06-02 00:18:15.752464 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-06-02 00:18:15.752469 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-06-02 00:18:15.752473 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-06-02 00:18:15.752478 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-06-02 00:18:15.752482 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-06-02 00:18:15.752486 | orchestrator |
2025-06-02 00:18:15.752492 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-06-02 00:18:16.374330 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:16.374460 | orchestrator |
2025-06-02 00:18:16.374484 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-06-02 00:18:17.020468 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:17.020581 | orchestrator |
2025-06-02 00:18:17.020596 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-06-02 00:18:17.099360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-06-02 00:18:17.099469 | orchestrator |
2025-06-02 00:18:17.099494 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-06-02 00:18:18.294923 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-06-02 00:18:18.295090 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-06-02 00:18:18.295115 | orchestrator |
2025-06-02 00:18:18.295128 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-06-02 00:18:18.886913 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:18.887080 | orchestrator |
2025-06-02 00:18:18.887108 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-06-02 00:18:18.929343 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:18:18.929392 | orchestrator |
2025-06-02 00:18:18.929408 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-06-02 00:18:18.979587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-06-02 00:18:18.979684 | orchestrator |
2025-06-02 00:18:18.979700 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-06-02 00:18:20.318521 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 00:18:20.318618 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 00:18:20.318673 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:20.318686 | orchestrator |
2025-06-02 00:18:20.318697 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-06-02 00:18:20.930914 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:20.931071 | orchestrator |
2025-06-02 00:18:20.931091 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-06-02 00:18:20.984267 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:18:20.984356 | orchestrator |
2025-06-02 00:18:20.984369 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-06-02 00:18:21.077509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-06-02 00:18:21.077594 | orchestrator |
2025-06-02 00:18:21.077605 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-06-02 00:18:21.604878 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:21.604994 | orchestrator |
2025-06-02 00:18:21.605085 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-06-02 00:18:22.010842 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:22.010947 | orchestrator |
2025-06-02 00:18:22.010963 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-06-02 00:18:23.208809 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-06-02 00:18:23.208932 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-06-02 00:18:23.208951 | orchestrator |
2025-06-02 00:18:23.208963 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-06-02 00:18:23.822118 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:23.822236 | orchestrator |
2025-06-02 00:18:23.822252 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-06-02 00:18:24.214349 | orchestrator | ok: [testbed-manager]
2025-06-02 00:18:24.214451 | orchestrator |
2025-06-02 00:18:24.214465 | orchestrator | TASK
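The two `fs.inotify` tasks in the play above raise kernel inotify limits before the celery-based services start, which is the usual fix for "inotify watch limit reached" failures under many watched files. A dry-run sketch of the equivalent sysctl calls (the numeric values are illustrative assumptions, not necessarily the role's defaults, and the helper only echoes the command because `sysctl -w` needs root):

```shell
# Dry-run sketch of the inotify tuning done by the config-celery tasks.
apply_sysctl() {
  # Real form: sysctl -w "$1=$2", plus a file under /etc/sysctl.d/ so the
  # setting survives reboots.
  printf 'sysctl -w %s=%s\n' "$1" "$2"
}

apply_sysctl fs.inotify.max_user_watches 524288
apply_sysctl fs.inotify.max_user_instances 512
```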
[osism.services.manager : Copy conductor configuration file] **************
2025-06-02 00:18:24.566830 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:24.566935 | orchestrator |
2025-06-02 00:18:24.566950 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-06-02 00:18:24.614553 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:18:24.614640 | orchestrator |
2025-06-02 00:18:24.614654 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-06-02 00:18:24.677646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-06-02 00:18:24.677743 | orchestrator |
2025-06-02 00:18:24.677759 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-06-02 00:18:24.725330 | orchestrator | ok: [testbed-manager]
2025-06-02 00:18:24.725402 | orchestrator |
2025-06-02 00:18:24.725416 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-06-02 00:18:26.694404 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-06-02 00:18:26.694552 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-06-02 00:18:26.694569 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-06-02 00:18:26.694581 | orchestrator |
2025-06-02 00:18:26.694594 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-06-02 00:18:27.385751 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:27.385877 | orchestrator |
2025-06-02 00:18:27.385897 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-06-02 00:18:28.081403 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:28.081513 | orchestrator |
2025-06-02 00:18:28.081530 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-06-02 00:18:28.766376 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:28.766479 | orchestrator |
2025-06-02 00:18:28.766494 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-06-02 00:18:28.846908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-06-02 00:18:28.847015 | orchestrator |
2025-06-02 00:18:28.847061 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-06-02 00:18:28.897677 | orchestrator | ok: [testbed-manager]
2025-06-02 00:18:28.897744 | orchestrator |
2025-06-02 00:18:28.897758 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-06-02 00:18:29.574357 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-06-02 00:18:29.574483 | orchestrator |
2025-06-02 00:18:29.574500 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-06-02 00:18:29.649494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-06-02 00:18:29.649601 | orchestrator |
2025-06-02 00:18:29.649616 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-06-02 00:18:30.331835 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:30.331940 | orchestrator |
2025-06-02 00:18:30.331956 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-06-02 00:18:30.928739 | orchestrator | ok: [testbed-manager]
2025-06-02 00:18:30.928832 | orchestrator |
2025-06-02 00:18:30.928843 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-06-02 00:18:30.981174 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:18:30.981267 | orchestrator |
2025-06-02 00:18:30.981280 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-06-02 00:18:31.032275 | orchestrator | ok: [testbed-manager]
2025-06-02 00:18:31.032364 | orchestrator |
2025-06-02 00:18:31.032379 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-06-02 00:18:31.803268 | orchestrator | changed: [testbed-manager]
2025-06-02 00:18:31.803370 | orchestrator |
2025-06-02 00:18:31.803385 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-06-02 00:19:33.287073 | orchestrator | changed: [testbed-manager]
2025-06-02 00:19:33.287257 | orchestrator |
2025-06-02 00:19:33.287278 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-06-02 00:19:34.227956 | orchestrator | ok: [testbed-manager]
2025-06-02 00:19:34.228058 | orchestrator |
2025-06-02 00:19:34.228073 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-06-02 00:19:34.287386 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:19:34.287475 | orchestrator |
2025-06-02 00:19:34.287488 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-06-02 00:19:36.983784 | orchestrator | changed: [testbed-manager]
2025-06-02 00:19:36.983890 | orchestrator |
2025-06-02 00:19:36.983906 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-06-02 00:19:37.045821 | orchestrator | ok: [testbed-manager]
2025-06-02 00:19:37.045929 | orchestrator |
2025-06-02 00:19:37.045948 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-02 00:19:37.045962 | orchestrator |
2025-06-02 00:19:37.045973 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-02 00:19:37.086693 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:19:37.086796 | orchestrator | 2025-06-02 00:19:37.086844 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-02 00:20:37.137596 | orchestrator | Pausing for 60 seconds 2025-06-02 00:20:37.137760 | orchestrator | changed: [testbed-manager] 2025-06-02 00:20:37.137788 | orchestrator | 2025-06-02 00:20:37.137809 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-02 00:20:41.733089 | orchestrator | changed: [testbed-manager] 2025-06-02 00:20:41.733261 | orchestrator | 2025-06-02 00:20:41.733318 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-02 00:21:23.237719 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-02 00:21:23.237839 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-02 00:21:23.237855 | orchestrator | changed: [testbed-manager] 2025-06-02 00:21:23.237869 | orchestrator | 2025-06-02 00:21:23.237880 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-02 00:21:31.476650 | orchestrator | changed: [testbed-manager] 2025-06-02 00:21:31.476791 | orchestrator | 2025-06-02 00:21:31.476832 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-02 00:21:31.547528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-02 00:21:31.547620 | orchestrator | 2025-06-02 00:21:31.547634 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-02 00:21:31.547647 | orchestrator | 2025-06-02 00:21:31.547658 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-02 00:21:31.606530 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:21:31.606625 | orchestrator | 2025-06-02 00:21:31.606638 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:21:31.606651 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 00:21:31.606661 | orchestrator | 2025-06-02 00:21:31.707033 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-02 00:21:31.707126 | orchestrator | + deactivate 2025-06-02 00:21:31.707141 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-02 00:21:31.707155 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-02 00:21:31.707166 | orchestrator | + export PATH 2025-06-02 00:21:31.707181 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-02 
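The "Wait for an healthy manager service" handler above retried with a visible budget of 50 attempts before succeeding. A minimal shell sketch of that bounded-wait pattern (the probe command is a placeholder, not the actual check inside the osism.services.manager role):

```shell
# Bounded health wait in the spirit of the handler above: run a probe command
# until it succeeds or the retry budget is exhausted. The probe is a
# placeholder; the real check lives in the osism.services.manager role.
wait_until_healthy() {
    local retries="$1" delay="$2"
    shift 2
    local attempt
    for (( attempt = 1; attempt <= retries; attempt++ )); do
        if "$@"; then
            return 0
        fi
        echo "FAILED - RETRYING ($(( retries - attempt )) retries left)" >&2
        sleep "$delay"
    done
    return 1
}
```

Invoked e.g. as `wait_until_healthy 50 5 some-probe-command`, mirroring the 50-retry budget visible in the log.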
00:21:31.707193 | orchestrator | + '[' -n '' ']' 2025-06-02 00:21:31.707205 | orchestrator | + hash -r 2025-06-02 00:21:31.707226 | orchestrator | + '[' -n '' ']' 2025-06-02 00:21:31.707238 | orchestrator | + unset VIRTUAL_ENV 2025-06-02 00:21:31.707249 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-02 00:21:31.707260 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-02 00:21:31.707271 | orchestrator | + unset -f deactivate 2025-06-02 00:21:31.707282 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-02 00:21:31.713387 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 00:21:31.713412 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-02 00:21:31.713424 | orchestrator | + local max_attempts=60 2025-06-02 00:21:31.713435 | orchestrator | + local name=ceph-ansible 2025-06-02 00:21:31.713446 | orchestrator | + local attempt_num=1 2025-06-02 00:21:31.714543 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 00:21:31.751754 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 00:21:31.751814 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-02 00:21:31.751826 | orchestrator | + local max_attempts=60 2025-06-02 00:21:31.751837 | orchestrator | + local name=kolla-ansible 2025-06-02 00:21:31.751848 | orchestrator | + local attempt_num=1 2025-06-02 00:21:31.752491 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-02 00:21:31.782189 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 00:21:31.782229 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-02 00:21:31.782241 | orchestrator | + local max_attempts=60 2025-06-02 00:21:31.782252 | orchestrator | + local name=osism-ansible 2025-06-02 00:21:31.782263 | orchestrator | + local attempt_num=1 2025-06-02 00:21:31.783215 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-06-02 00:21:31.816634 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 00:21:31.816715 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-02 00:21:31.816725 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-02 00:21:32.474426 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-02 00:21:32.652161 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-02 00:21:32.652258 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-02 00:21:32.652274 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-02 00:21:32.652286 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-02 00:21:32.652298 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-02 00:21:32.652307 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-02 00:21:32.652317 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-02 00:21:32.652327 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-06-02 00:21:32.652336 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-06-02 00:21:32.652399 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-02 00:21:32.652411 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-02 00:21:32.652421 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-02 00:21:32.652431 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy) 2025-06-02 00:21:32.652440 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-02 00:21:32.652450 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-02 00:21:32.652460 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-02 00:21:32.660120 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-02 00:21:32.711670 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-02 00:21:32.711765 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-02 00:21:32.713473 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-02 00:21:34.392028 | orchestrator | Registering Redlock._acquired_script 2025-06-02 00:21:34.392121 | orchestrator | Registering Redlock._extend_script 2025-06-02 00:21:34.392135 | orchestrator | Registering Redlock._release_script 
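The `wait_for_container_healthy` calls traced above (`max_attempts`, `name`, `attempt_num`, and the `docker inspect` health probe) can be reconstructed as roughly the following function. This is a sketch inferred from the trace, not the testbed's actual script; the inspect command is made overridable here purely so the loop can be exercised without a Docker daemon:

```shell
# Reconstruction (from the trace above) of polling Docker's health status.
# DOCKER_INSPECT is overridable so the loop can run without a Docker daemon;
# the traced script calls /usr/bin/docker inspect directly.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(${DOCKER_INSPECT:-docker inspect} -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num == max_attempts )); then
            echo "container $name is not healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

In the trace each container (ceph-ansible, kolla-ansible, osism-ansible) reported `healthy` on the first probe, so the loop exited immediately.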
2025-06-02 00:21:34.585374 | orchestrator | 2025-06-02 00:21:34 | INFO  | Task df2a7c90-d5c5-46b7-9b9e-1f4bc5b94f7b (resolvconf) was prepared for execution. 2025-06-02 00:21:34.585475 | orchestrator | 2025-06-02 00:21:34 | INFO  | It takes a moment until task df2a7c90-d5c5-46b7-9b9e-1f4bc5b94f7b (resolvconf) has been started and output is visible here. 2025-06-02 00:21:38.380792 | orchestrator | 2025-06-02 00:21:38.380922 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-02 00:21:38.380939 | orchestrator | 2025-06-02 00:21:38.382598 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 00:21:38.384954 | orchestrator | Monday 02 June 2025 00:21:38 +0000 (0:00:00.142) 0:00:00.142 *********** 2025-06-02 00:21:41.771036 | orchestrator | ok: [testbed-manager] 2025-06-02 00:21:41.771169 | orchestrator | 2025-06-02 00:21:41.771188 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-02 00:21:41.771248 | orchestrator | Monday 02 June 2025 00:21:41 +0000 (0:00:03.394) 0:00:03.536 *********** 2025-06-02 00:21:41.817723 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:21:41.818338 | orchestrator | 2025-06-02 00:21:41.819946 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-02 00:21:41.821258 | orchestrator | Monday 02 June 2025 00:21:41 +0000 (0:00:00.048) 0:00:03.585 *********** 2025-06-02 00:21:41.898613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-02 00:21:41.899894 | orchestrator | 2025-06-02 00:21:41.901483 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-02 00:21:41.902377 | orchestrator | Monday 02 June 2025 00:21:41 +0000 (0:00:00.080) 0:00:03.665 
*********** 2025-06-02 00:21:41.958082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 00:21:41.959088 | orchestrator | 2025-06-02 00:21:41.960070 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-02 00:21:41.961293 | orchestrator | Monday 02 June 2025 00:21:41 +0000 (0:00:00.059) 0:00:03.724 *********** 2025-06-02 00:21:42.964671 | orchestrator | ok: [testbed-manager] 2025-06-02 00:21:42.965021 | orchestrator | 2025-06-02 00:21:42.966409 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-02 00:21:42.967053 | orchestrator | Monday 02 June 2025 00:21:42 +0000 (0:00:01.004) 0:00:04.729 *********** 2025-06-02 00:21:43.030982 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:21:43.031560 | orchestrator | 2025-06-02 00:21:43.032854 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-02 00:21:43.033969 | orchestrator | Monday 02 June 2025 00:21:43 +0000 (0:00:00.067) 0:00:04.796 *********** 2025-06-02 00:21:43.512730 | orchestrator | ok: [testbed-manager] 2025-06-02 00:21:43.512889 | orchestrator | 2025-06-02 00:21:43.512938 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-02 00:21:43.512975 | orchestrator | Monday 02 June 2025 00:21:43 +0000 (0:00:00.482) 0:00:05.279 *********** 2025-06-02 00:21:43.579877 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:21:43.581031 | orchestrator | 2025-06-02 00:21:43.581770 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-02 00:21:43.582707 | orchestrator | Monday 02 June 2025 00:21:43 +0000 (0:00:00.067) 0:00:05.346 *********** 2025-06-02 00:21:44.086530 | orchestrator | changed: [testbed-manager] 
2025-06-02 00:21:44.086952 | orchestrator | 2025-06-02 00:21:44.088712 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-02 00:21:44.089295 | orchestrator | Monday 02 June 2025 00:21:44 +0000 (0:00:00.503) 0:00:05.849 *********** 2025-06-02 00:21:45.123004 | orchestrator | changed: [testbed-manager] 2025-06-02 00:21:45.123905 | orchestrator | 2025-06-02 00:21:45.124137 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-02 00:21:45.125358 | orchestrator | Monday 02 June 2025 00:21:45 +0000 (0:00:01.037) 0:00:06.887 *********** 2025-06-02 00:21:46.023027 | orchestrator | ok: [testbed-manager] 2025-06-02 00:21:46.023160 | orchestrator | 2025-06-02 00:21:46.025756 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-02 00:21:46.026142 | orchestrator | Monday 02 June 2025 00:21:46 +0000 (0:00:00.900) 0:00:07.787 *********** 2025-06-02 00:21:46.091781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-02 00:21:46.092833 | orchestrator | 2025-06-02 00:21:46.094395 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-02 00:21:46.096345 | orchestrator | Monday 02 June 2025 00:21:46 +0000 (0:00:00.071) 0:00:07.859 *********** 2025-06-02 00:21:47.196574 | orchestrator | changed: [testbed-manager] 2025-06-02 00:21:47.196661 | orchestrator | 2025-06-02 00:21:47.196980 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:21:47.197401 | orchestrator | 2025-06-02 00:21:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 00:21:47.197570 | orchestrator | 2025-06-02 00:21:47 | INFO  | Please wait and do not abort execution. 
2025-06-02 00:21:47.198187 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 00:21:47.198497 | orchestrator | 2025-06-02 00:21:47.199045 | orchestrator | 2025-06-02 00:21:47.199416 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:21:47.199884 | orchestrator | Monday 02 June 2025 00:21:47 +0000 (0:00:01.103) 0:00:08.962 *********** 2025-06-02 00:21:47.199967 | orchestrator | =============================================================================== 2025-06-02 00:21:47.200188 | orchestrator | Gathering Facts --------------------------------------------------------- 3.39s 2025-06-02 00:21:47.200431 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.10s 2025-06-02 00:21:47.200717 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2025-06-02 00:21:47.200897 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.00s 2025-06-02 00:21:47.201096 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.90s 2025-06-02 00:21:47.201345 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2025-06-02 00:21:47.201728 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2025-06-02 00:21:47.201827 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-06-02 00:21:47.202139 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-06-02 00:21:47.202292 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-06-02 00:21:47.202534 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-06-02 
00:21:47.202900 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2025-06-02 00:21:47.203223 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-06-02 00:21:47.624610 | orchestrator | + osism apply sshconfig 2025-06-02 00:21:49.235304 | orchestrator | Registering Redlock._acquired_script 2025-06-02 00:21:49.235497 | orchestrator | Registering Redlock._extend_script 2025-06-02 00:21:49.235527 | orchestrator | Registering Redlock._release_script 2025-06-02 00:21:49.288631 | orchestrator | 2025-06-02 00:21:49 | INFO  | Task e4421848-b2ce-497c-b637-0f88ea673f9b (sshconfig) was prepared for execution. 2025-06-02 00:21:49.288746 | orchestrator | 2025-06-02 00:21:49 | INFO  | It takes a moment until task e4421848-b2ce-497c-b637-0f88ea673f9b (sshconfig) has been started and output is visible here. 2025-06-02 00:21:53.007776 | orchestrator | 2025-06-02 00:21:53.007869 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-02 00:21:53.008092 | orchestrator | 2025-06-02 00:21:53.009497 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-02 00:21:53.010260 | orchestrator | Monday 02 June 2025 00:21:52 +0000 (0:00:00.117) 0:00:00.117 *********** 2025-06-02 00:21:53.465952 | orchestrator | ok: [testbed-manager] 2025-06-02 00:21:53.466502 | orchestrator | 2025-06-02 00:21:53.467089 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-02 00:21:53.467993 | orchestrator | Monday 02 June 2025 00:21:53 +0000 (0:00:00.461) 0:00:00.579 *********** 2025-06-02 00:21:53.887631 | orchestrator | changed: [testbed-manager] 2025-06-02 00:21:53.887722 | orchestrator | 2025-06-02 00:21:53.888270 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-02 00:21:53.888914 | orchestrator 
| Monday 02 June 2025 00:21:53 +0000 (0:00:00.420) 0:00:01.000 *********** 2025-06-02 00:21:58.880888 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-02 00:21:58.880987 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-02 00:21:58.881284 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-02 00:21:58.882269 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-02 00:21:58.883838 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-02 00:21:58.884244 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-02 00:21:58.884799 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-02 00:21:58.885276 | orchestrator | 2025-06-02 00:21:58.885739 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-02 00:21:58.886233 | orchestrator | Monday 02 June 2025 00:21:58 +0000 (0:00:04.991) 0:00:05.991 *********** 2025-06-02 00:21:58.940099 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:21:58.940266 | orchestrator | 2025-06-02 00:21:58.941561 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-02 00:21:58.942176 | orchestrator | Monday 02 June 2025 00:21:58 +0000 (0:00:00.061) 0:00:06.053 *********** 2025-06-02 00:21:59.454123 | orchestrator | changed: [testbed-manager] 2025-06-02 00:21:59.454203 | orchestrator | 2025-06-02 00:21:59.456299 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:21:59.456327 | orchestrator | 2025-06-02 00:21:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 00:21:59.456342 | orchestrator | 2025-06-02 00:21:59 | INFO  | Please wait and do not abort execution. 
2025-06-02 00:21:59.457135 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 00:21:59.457765 | orchestrator | 2025-06-02 00:21:59.458438 | orchestrator | 2025-06-02 00:21:59.459299 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:21:59.459972 | orchestrator | Monday 02 June 2025 00:21:59 +0000 (0:00:00.512) 0:00:06.565 *********** 2025-06-02 00:21:59.460557 | orchestrator | =============================================================================== 2025-06-02 00:21:59.461085 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.99s 2025-06-02 00:21:59.461626 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.51s 2025-06-02 00:21:59.462523 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.46s 2025-06-02 00:21:59.463006 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.42s 2025-06-02 00:21:59.463471 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-06-02 00:21:59.745948 | orchestrator | + osism apply known-hosts 2025-06-02 00:22:01.252683 | orchestrator | Registering Redlock._acquired_script 2025-06-02 00:22:01.252785 | orchestrator | Registering Redlock._extend_script 2025-06-02 00:22:01.252801 | orchestrator | Registering Redlock._release_script 2025-06-02 00:22:01.307927 | orchestrator | 2025-06-02 00:22:01 | INFO  | Task eb8ce7c7-0098-49af-9652-e6fde2fc8d53 (known-hosts) was prepared for execution. 2025-06-02 00:22:01.307977 | orchestrator | 2025-06-02 00:22:01 | INFO  | It takes a moment until task eb8ce7c7-0098-49af-9652-e6fde2fc8d53 (known-hosts) has been started and output is visible here. 
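The sshconfig play above drops one fragment per host into `.ssh/config.d` and then assembles them into a single config ("Ensure config for each host exist", then "Assemble ssh config"). A minimal sketch of that fragment-then-assemble pattern; the paths, user name, and host list are illustrative, as the role derives them from the Ansible inventory:

```shell
# Fragment-then-assemble pattern as used by osism.commons.sshconfig above.
# Directory layout and host names are illustrative; the role derives them
# from the Ansible inventory.
write_ssh_config() {
    local home="$1"; shift
    mkdir -p "$home/.ssh/config.d"
    local host
    for host in "$@"; do
        # One fragment per host, as in "Ensure config for each host exist".
        printf 'Host %s\n    User dragon\n' "$host" \
            > "$home/.ssh/config.d/$host"
    done
    # Concatenate the fragments, like Ansible's assemble module.
    cat "$home/.ssh/config.d"/* > "$home/.ssh/config"
}
```

Keeping one fragment per host makes the per-item `changed` reporting in the log possible: each host's file is an independent task item.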
2025-06-02 00:22:05.138692 | orchestrator | 2025-06-02 00:22:05.139081 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-02 00:22:05.139890 | orchestrator | 2025-06-02 00:22:05.141384 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-02 00:22:05.142495 | orchestrator | Monday 02 June 2025 00:22:05 +0000 (0:00:00.157) 0:00:00.157 *********** 2025-06-02 00:22:10.968937 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-02 00:22:10.969054 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-02 00:22:10.969070 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-02 00:22:10.969618 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-02 00:22:10.969845 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-02 00:22:10.971282 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-02 00:22:10.971842 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-02 00:22:10.972515 | orchestrator | 2025-06-02 00:22:10.972702 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-02 00:22:10.973537 | orchestrator | Monday 02 June 2025 00:22:10 +0000 (0:00:05.830) 0:00:05.987 *********** 2025-06-02 00:22:11.132522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-02 00:22:11.132621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-02 00:22:11.132635 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-02 00:22:11.132647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-02 00:22:11.132658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-02 00:22:11.132767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-02 00:22:11.136912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-02 00:22:11.137394 | orchestrator | 2025-06-02 00:22:11.137777 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:11.141178 | orchestrator | Monday 02 June 2025 00:22:11 +0000 (0:00:00.164) 0:00:06.152 *********** 2025-06-02 00:22:12.289963 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGKEvAa46Q4Rkj9lSAAp2o5B4GDC17+ciB7c8GRYnDfm) 2025-06-02 00:22:12.291651 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDEyKfYKUxIAJkmf5tE3Nk8Lgu7QOVnbBXePHx4P1hhw8LK4HcFxVGVhZ739nr2f/30NJnEPIc9OcFfj1LTZCRluriF7p3pjdPfZTqnqUV3DcB25IG8kHMq9SoER82gFcppVj1VU/CVONkwuh3YUHL7BQa5Nddf5PznGr1sbGFDU2ZBZ8fFid+Az30xyXbGGh1bGCtjkVRs6PGTlJmTaMGNOdD805GTx396YgCcgbaa6cXILhy0F3N9+USWAWcXJGFUvnn4myAxXkNKIiunpt/pxal3wKr6Kby7aV/3Y4r+K0Rs6rIUZRrcIiyQV4nKUNVaTR9Y67sTpK7dExD0fzqk//MagdIK47TGljjLtAMuUc5WXz7xG3UM9JlRkSbOZdsLrLXn2VV+0Z2wUt1kCKsvbgK7ctctYegnrWNDei3eSw7blVGTf5YGVvuEFlisr6yL4il6S6BFSjdzwwarPwlLDC7oA7GxY6gGr2lt9sx9CFffTaMCxDfgRYR4p6A2tps=) 2025-06-02 00:22:12.292688 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWbiMz9wspwWgcKudwrCa8JtI/7Nvu0aYeZI+36oLYwd5MMlSGDtEKhEpnV8LONtwyqEZiWcIyg8fgPv/FGCi8=) 2025-06-02 00:22:12.293540 | orchestrator | 2025-06-02 00:22:12.294172 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:12.294773 | orchestrator | Monday 02 June 2025 00:22:12 +0000 (0:00:01.157) 0:00:07.310 *********** 2025-06-02 00:22:13.252993 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNQ07wvvgVHMMeAgiCqssQjPKZq18+8S7hVjOWhUW7GGmkepYlh8rGlzGjaWNj5WkUwGKs5BpcIJHJN2xEvOCLc=) 2025-06-02 00:22:13.253098 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeVW4nX+fzR0zqD+70QEGMg0Hk22PE2B35AQTiQfjugXl9GQjKOXaPnaO40jKNtXuxzgP7Tu03Znwi0IZl3rFv8IIx7+AihHSfeGPYdpYQhM514//hwsF4PsHm+qNH2i5UqRMJ9pMvu39LY+Ow2N07/iXNnK9RO61uQYLp9NK39KEk7A5T0BWO1R7BUDcvD1aXLuwVmyKBvcVSsakos42mJp2MFNT/+AUBZdX4U9z8oZdy+UXzU4gttDfo3i2tWYISr992jIpfZwye3qG6flbopY/Y49qzFkyw0BMUmM3FadkAUHHvfggq83rqiJYDmDtSQfjZheIbjjYKy4Qnf2ZizxR1+FhgltaYuF7GkgiuhSJ3xzi7D8lHsk2GGEQY+VII8v4u8W/PwVhTMRm+/G8CoxwQYS4+cTG6gcBoyX4G8W9rM5Ck8req86VhCepeZ3GJYQDeLMlQ5w5/1D9laM7dfTJ8ww6xY3YuNyCOJehJvOxIq/7poqVMDQe+7BUYSrk=) 
2025-06-02 00:22:13.254129 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFpTPk8+CSwSYa01wRWX8SoQw2WwFXF+du7LEvwWTBCh) 2025-06-02 00:22:13.255054 | orchestrator | 2025-06-02 00:22:13.255685 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:13.256381 | orchestrator | Monday 02 June 2025 00:22:13 +0000 (0:00:00.963) 0:00:08.273 *********** 2025-06-02 00:22:14.304485 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCf5gVPIG/doNrN0p8BnrxPCfr63SkEBNm1/0BMo/4gcYIwseFY8dhSfTESCQ4MuQIjCyWdVozMxdboSdyXNIkE=) 2025-06-02 00:22:14.305182 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMjRVhvWkikWq50WsZnmV/RSQPXLA/vLyuFsRPSeTBoE) 2025-06-02 00:22:14.306282 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+82eKXvrMXDbCjEWBWgczgIaquaTLrErQTjM93gQAHbm8Om2gGVHSpLa/xFBwNc5yG5VnYA6D9WeMp61QL59OZi3XZRmATYNKpj78D8RfAX/UGFqTFcqexXkhovPglv/XkBV8NfLrTIz1Qm9CyDoNZHiZjvSMgAdXy/3Mf6BLxC7FY//EnNKYTiZGaL1oMtQLq9ptRPQLRub9+1F1j3s6dodKDxd8nnpGBo0q4nDvo7xQZPDx9kRQjKmlPaQEi4QLgxe2hvnMVA0eqAtQEmgkfBWX2K9qpgvgYL1RUT423kMxDQ62Fepnpe7hZdxmRL3v6og+A3l7iv0zG4aiNKWOBnubzvgTQA6eI73F8ZuTR5JNcEckznPH2HosI1lptm3UjLlMWDglJl+dakfQISReiUrMF/BYUFfkmrJBS0IdRBbKaHwN91F1RrmxagmfWUCZ/+8rpQiZ965aewdoQOTjs2Bck50BOIuj3kSHvVXMS6H4nItU37t2FpXwxs+8EW0=) 2025-06-02 00:22:14.306957 | orchestrator | 2025-06-02 00:22:14.307786 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:14.308480 | orchestrator | Monday 02 June 2025 00:22:14 +0000 (0:00:01.049) 0:00:09.323 *********** 2025-06-02 00:22:15.305277 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCqYzZ8p1fvkwPNbkLO2KZaL8xQ0biZbEt/xsko5fevMOxZzxAObK2idl8hgqWtrmC9Syh2edAJTmeGnckWHb4sYAxRalBBX0ZVGIWSKipeo6dPqhzDH72wR5XM233OyRdQpd13TB6sBQpd1Avv2EO7sPzJjO/2E4OKIokfuRZf4eGgXm9DFHHJTG1uu6xnQI3pK0T99atNvYVc+lhUDlzX8V2UantQWE8v144CZLris7qPfjjtUJ8L2xSG+hHbacO9GAgWQ4PysO/GIvWUcY/8g0MWalQrWVNA1el9le5lEgB2fAjKiliXD9zR/l6dv372xNp4Bw95ipKkLU1ba7iF68NEivtiJPal2OPj2fiYE/JHX9L3FccOTuB7B0/6V8ScTGhzMU1a04xROdNHmM21b/ubmaApJbK4uamJFnXE6bMq3fLA/hjz09EX+BoK1BQQe2vAmSJyiBI32V/YboBvvRveXBGnoXoPkIsmP7l7g4G+7kdxbt2XN2R6Pl/cSU0=) 2025-06-02 00:22:15.305644 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMHKo3k+85aNZY1a+vqTqaFF7qXf6s572TXXcBTep4C0OSzt/+YKhyuTb0/NMJQIqti8/q3elaLexPtOlvdWAIA=) 2025-06-02 00:22:15.306111 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBD9zz3zn4hRtxBXzCrmnXI5yfZztUpEaa/Zhec8yN9j) 2025-06-02 00:22:15.306878 | orchestrator | 2025-06-02 00:22:15.307464 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:15.307961 | orchestrator | Monday 02 June 2025 00:22:15 +0000 (0:00:01.003) 0:00:10.326 *********** 2025-06-02 00:22:16.356388 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnUeYUVu3Bp9E0g0MuGl5IACaua0aJZcDz6JRWCLHF2HeOAQYWje0//RxTMUIx233NzKDbg7gGYHTz860RAdEOVt4Reg5VTv5SeamPabTFs3za8RebPFOHwQH/QeGcJYAO7AOJ5ZYHcfNndXqHZBe22eC24dZh4J7oWWcD2Ha9HDKyV08khjO2AozEaPDr1/T2O02QKsvSX9o81UNMUoWHxP5vAH2+Yv8aQURrS7nN0XpF7q44TgL/njGObEAxTeLQcxBw49z+ZLmHmKHR0LaU89Ic2MShYRVK3AzLSHzqbGh0ai8KTFlBgWdpLQ1Epo9ZbynZQacW/k6n0615yF3RVMdQqUS6+58+O50fB0w7dF5ThvgGd1PNPRiOMbLCcY06xpqTKyHKeW0lHPW7WsuZk+w0LtkvwXGt8xt6JR8EwtHUAFhReZVRX0Ch26UE8jg2fVmYJc1fNZsnABFyULyUAREScTG/ze+37oBkHhcyQIZU3M5snFJ2IL4wUEdVgkM=) 2025-06-02 00:22:16.356632 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIZxq2Dbh6cIP9goruf5CxVSKPo3Oj6mVV1AdoovWWXg) 2025-06-02 00:22:16.359208 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCqfD5w1VH0+vH5Eluq8a/k/YUf72aPPP53CSsOs1MCP9NrwyEHLprTwkvAH+DW6l2LLDleGD+kbvLF51ZXolIg=) 2025-06-02 00:22:16.360044 | orchestrator | 2025-06-02 00:22:16.360728 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:16.361442 | orchestrator | Monday 02 June 2025 00:22:16 +0000 (0:00:01.051) 0:00:11.377 *********** 2025-06-02 00:22:17.374224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+FheHQQ0U5FRY9vr0MUu6dK7yJQacGqyTFqcajVsD728Swl64iqXdtB9M+wcHsyRa6LMCDA4HQx65sp3F8GK55fZG4+Sn2pKMqAOil8ROlkvXsynG+qiqBJECOF5M+whY1cOGuy71nZSE3VBsr2rScySA33bgkoBnflz4amTWYtB6RTnlcvlx3PQ/YSiWo9lVhsSIAL5FG9xA6kqREuaxJgVluaRm3WUi6UHYyL+NPLn9PPfq7HEINRDFYqRx1jvWDMCO49wCsr7skHIkFt02lUCv3E0UPzlG7MHJipyPgxIrFent3lrh2gkUj1YkYUfEOK9nKLw/mb/CY5et84cs1ZhBCXzOy+P8gl+ci+k4W+XG8v58WYDbwtMDvYcfzXLpKJzSdl1hCJ4AnsMlg4SB4F8Z3aRP8KsqTqdDXWDJ9sVP/oN3AHKlYVC3l9pu6o66T/IeKn+bC9AfF3RjqGjrz7Y6DPt3SrfF91i8GK27oJFf15whmSxP+Jk+GKBo4tM=) 2025-06-02 00:22:17.374333 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFu8JSXlwQwCo+n/6fc2vnRgbAMELwxcvfJoXeRDc9nle5WovuArAG+nip9cSFJZx6W57yMDnVSYYc+YssnaXp8=) 2025-06-02 00:22:17.375066 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJzr9C8pvY9qTOiLUB5+DBxai1JpSLK5X1Q3o0dIGQjD) 2025-06-02 00:22:17.376841 | orchestrator | 2025-06-02 00:22:17.378007 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:17.378959 | orchestrator | Monday 02 June 2025 00:22:17 +0000 (0:00:01.016) 
0:00:12.394 *********** 2025-06-02 00:22:18.382224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOYYU9DfFBiHBtZRq5z8Vi/7PczvDujnEl4F8u6tjvklcEj3tI0FTUbFUhGXGAd/Pi/UQzSSpFkjlurPKwXDSXo=) 2025-06-02 00:22:18.382596 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID2pGqFshACRtEt3Mq1X7hMnsB1CA9RsR8JGVHDEvAzl) 2025-06-02 00:22:18.382917 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+MzhWRG7jHfcTQkIYGLnnXzpvK98K+kt3iEKA6BTIUqK5zJ+La3KlhmZsh0pvqdbwy6SpZ6xNTGUwjNkiHzInu/KkNePX7bmaZAJ/wR9MY3pOS7ikVAzjSexpAZcH/ArWT5iJiMIl99JKdrsTOszfF0gUxodQ5bnnriVmCLxm7c4IqZR2sGzXTxhk+qnn0Xb3j3ss7MnjZ+6OSzgXV8TXIegEIQkOWMEBU7fyc3n6zo0qCWllyS3eyJBY5iBxmdhNW3N85CAg4/mLC6XIVaRgPLSTaaZ9XBI/l01W0KtXgj2xfuvSJNJ8TTcHRcywwffNcu2Lf681f9cY6Xo9mjKBwLLf7T6SGb+HwENKlL8Rawr1l59nlfHmjdU6gaCU4PZKAdngI6sTQOcg0HdrSzxKc1hD3EQeHekHnm/abwvMcPepjvy/51IDPs6wKACpbcScZzFq2SISaHwp/R2YKzxrsk0lrK58K5JmPO/52t1xG/UddiaKmt2P1+ds8RhbDKc=) 2025-06-02 00:22:18.384026 | orchestrator | 2025-06-02 00:22:18.384592 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-02 00:22:18.385266 | orchestrator | Monday 02 June 2025 00:22:18 +0000 (0:00:01.007) 0:00:13.402 *********** 2025-06-02 00:22:23.528563 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-02 00:22:23.528677 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-02 00:22:23.529216 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-02 00:22:23.530887 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-02 00:22:23.531390 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-02 00:22:23.531761 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-02 00:22:23.532256 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-2) 2025-06-02 00:22:23.532805 | orchestrator | 2025-06-02 00:22:23.533665 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-02 00:22:23.533918 | orchestrator | Monday 02 June 2025 00:22:23 +0000 (0:00:05.144) 0:00:18.546 *********** 2025-06-02 00:22:23.689987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-02 00:22:23.690828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-02 00:22:23.690948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-02 00:22:23.691545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-02 00:22:23.692055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-02 00:22:23.692864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-02 00:22:23.693367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-02 00:22:23.693997 | orchestrator | 2025-06-02 00:22:23.694390 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:23.694803 | orchestrator | Monday 02 June 2025 00:22:23 +0000 (0:00:00.161) 0:00:18.709 *********** 2025-06-02 00:22:24.704536 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGKEvAa46Q4Rkj9lSAAp2o5B4GDC17+ciB7c8GRYnDfm) 2025-06-02 00:22:24.705344 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEyKfYKUxIAJkmf5tE3Nk8Lgu7QOVnbBXePHx4P1hhw8LK4HcFxVGVhZ739nr2f/30NJnEPIc9OcFfj1LTZCRluriF7p3pjdPfZTqnqUV3DcB25IG8kHMq9SoER82gFcppVj1VU/CVONkwuh3YUHL7BQa5Nddf5PznGr1sbGFDU2ZBZ8fFid+Az30xyXbGGh1bGCtjkVRs6PGTlJmTaMGNOdD805GTx396YgCcgbaa6cXILhy0F3N9+USWAWcXJGFUvnn4myAxXkNKIiunpt/pxal3wKr6Kby7aV/3Y4r+K0Rs6rIUZRrcIiyQV4nKUNVaTR9Y67sTpK7dExD0fzqk//MagdIK47TGljjLtAMuUc5WXz7xG3UM9JlRkSbOZdsLrLXn2VV+0Z2wUt1kCKsvbgK7ctctYegnrWNDei3eSw7blVGTf5YGVvuEFlisr6yL4il6S6BFSjdzwwarPwlLDC7oA7GxY6gGr2lt9sx9CFffTaMCxDfgRYR4p6A2tps=) 2025-06-02 00:22:24.706410 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWbiMz9wspwWgcKudwrCa8JtI/7Nvu0aYeZI+36oLYwd5MMlSGDtEKhEpnV8LONtwyqEZiWcIyg8fgPv/FGCi8=) 2025-06-02 00:22:24.706465 | orchestrator | 2025-06-02 00:22:24.707150 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:24.708175 | orchestrator | Monday 02 June 2025 00:22:24 +0000 (0:00:01.016) 0:00:19.725 *********** 2025-06-02 00:22:25.694669 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCeVW4nX+fzR0zqD+70QEGMg0Hk22PE2B35AQTiQfjugXl9GQjKOXaPnaO40jKNtXuxzgP7Tu03Znwi0IZl3rFv8IIx7+AihHSfeGPYdpYQhM514//hwsF4PsHm+qNH2i5UqRMJ9pMvu39LY+Ow2N07/iXNnK9RO61uQYLp9NK39KEk7A5T0BWO1R7BUDcvD1aXLuwVmyKBvcVSsakos42mJp2MFNT/+AUBZdX4U9z8oZdy+UXzU4gttDfo3i2tWYISr992jIpfZwye3qG6flbopY/Y49qzFkyw0BMUmM3FadkAUHHvfggq83rqiJYDmDtSQfjZheIbjjYKy4Qnf2ZizxR1+FhgltaYuF7GkgiuhSJ3xzi7D8lHsk2GGEQY+VII8v4u8W/PwVhTMRm+/G8CoxwQYS4+cTG6gcBoyX4G8W9rM5Ck8req86VhCepeZ3GJYQDeLMlQ5w5/1D9laM7dfTJ8ww6xY3YuNyCOJehJvOxIq/7poqVMDQe+7BUYSrk=) 2025-06-02 00:22:25.695342 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNQ07wvvgVHMMeAgiCqssQjPKZq18+8S7hVjOWhUW7GGmkepYlh8rGlzGjaWNj5WkUwGKs5BpcIJHJN2xEvOCLc=) 2025-06-02 00:22:25.695767 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFpTPk8+CSwSYa01wRWX8SoQw2WwFXF+du7LEvwWTBCh) 2025-06-02 00:22:25.696257 | orchestrator | 2025-06-02 00:22:25.696862 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:25.697350 | orchestrator | Monday 02 June 2025 00:22:25 +0000 (0:00:00.990) 0:00:20.715 *********** 2025-06-02 00:22:26.755807 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMjRVhvWkikWq50WsZnmV/RSQPXLA/vLyuFsRPSeTBoE) 2025-06-02 00:22:26.756583 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+82eKXvrMXDbCjEWBWgczgIaquaTLrErQTjM93gQAHbm8Om2gGVHSpLa/xFBwNc5yG5VnYA6D9WeMp61QL59OZi3XZRmATYNKpj78D8RfAX/UGFqTFcqexXkhovPglv/XkBV8NfLrTIz1Qm9CyDoNZHiZjvSMgAdXy/3Mf6BLxC7FY//EnNKYTiZGaL1oMtQLq9ptRPQLRub9+1F1j3s6dodKDxd8nnpGBo0q4nDvo7xQZPDx9kRQjKmlPaQEi4QLgxe2hvnMVA0eqAtQEmgkfBWX2K9qpgvgYL1RUT423kMxDQ62Fepnpe7hZdxmRL3v6og+A3l7iv0zG4aiNKWOBnubzvgTQA6eI73F8ZuTR5JNcEckznPH2HosI1lptm3UjLlMWDglJl+dakfQISReiUrMF/BYUFfkmrJBS0IdRBbKaHwN91F1RrmxagmfWUCZ/+8rpQiZ965aewdoQOTjs2Bck50BOIuj3kSHvVXMS6H4nItU37t2FpXwxs+8EW0=) 2025-06-02 00:22:26.757100 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCf5gVPIG/doNrN0p8BnrxPCfr63SkEBNm1/0BMo/4gcYIwseFY8dhSfTESCQ4MuQIjCyWdVozMxdboSdyXNIkE=) 2025-06-02 00:22:26.757751 | orchestrator | 2025-06-02 00:22:26.758506 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:26.759164 | orchestrator | Monday 02 June 2025 00:22:26 +0000 (0:00:01.060) 0:00:21.776 *********** 2025-06-02 00:22:27.766401 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqYzZ8p1fvkwPNbkLO2KZaL8xQ0biZbEt/xsko5fevMOxZzxAObK2idl8hgqWtrmC9Syh2edAJTmeGnckWHb4sYAxRalBBX0ZVGIWSKipeo6dPqhzDH72wR5XM233OyRdQpd13TB6sBQpd1Avv2EO7sPzJjO/2E4OKIokfuRZf4eGgXm9DFHHJTG1uu6xnQI3pK0T99atNvYVc+lhUDlzX8V2UantQWE8v144CZLris7qPfjjtUJ8L2xSG+hHbacO9GAgWQ4PysO/GIvWUcY/8g0MWalQrWVNA1el9le5lEgB2fAjKiliXD9zR/l6dv372xNp4Bw95ipKkLU1ba7iF68NEivtiJPal2OPj2fiYE/JHX9L3FccOTuB7B0/6V8ScTGhzMU1a04xROdNHmM21b/ubmaApJbK4uamJFnXE6bMq3fLA/hjz09EX+BoK1BQQe2vAmSJyiBI32V/YboBvvRveXBGnoXoPkIsmP7l7g4G+7kdxbt2XN2R6Pl/cSU0=) 2025-06-02 00:22:27.767272 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMHKo3k+85aNZY1a+vqTqaFF7qXf6s572TXXcBTep4C0OSzt/+YKhyuTb0/NMJQIqti8/q3elaLexPtOlvdWAIA=) 
2025-06-02 00:22:27.767336 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBD9zz3zn4hRtxBXzCrmnXI5yfZztUpEaa/Zhec8yN9j) 2025-06-02 00:22:27.768084 | orchestrator | 2025-06-02 00:22:27.769558 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:27.769584 | orchestrator | Monday 02 June 2025 00:22:27 +0000 (0:00:01.010) 0:00:22.786 *********** 2025-06-02 00:22:28.785342 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnUeYUVu3Bp9E0g0MuGl5IACaua0aJZcDz6JRWCLHF2HeOAQYWje0//RxTMUIx233NzKDbg7gGYHTz860RAdEOVt4Reg5VTv5SeamPabTFs3za8RebPFOHwQH/QeGcJYAO7AOJ5ZYHcfNndXqHZBe22eC24dZh4J7oWWcD2Ha9HDKyV08khjO2AozEaPDr1/T2O02QKsvSX9o81UNMUoWHxP5vAH2+Yv8aQURrS7nN0XpF7q44TgL/njGObEAxTeLQcxBw49z+ZLmHmKHR0LaU89Ic2MShYRVK3AzLSHzqbGh0ai8KTFlBgWdpLQ1Epo9ZbynZQacW/k6n0615yF3RVMdQqUS6+58+O50fB0w7dF5ThvgGd1PNPRiOMbLCcY06xpqTKyHKeW0lHPW7WsuZk+w0LtkvwXGt8xt6JR8EwtHUAFhReZVRX0Ch26UE8jg2fVmYJc1fNZsnABFyULyUAREScTG/ze+37oBkHhcyQIZU3M5snFJ2IL4wUEdVgkM=) 2025-06-02 00:22:28.786180 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCqfD5w1VH0+vH5Eluq8a/k/YUf72aPPP53CSsOs1MCP9NrwyEHLprTwkvAH+DW6l2LLDleGD+kbvLF51ZXolIg=) 2025-06-02 00:22:28.787016 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIZxq2Dbh6cIP9goruf5CxVSKPo3Oj6mVV1AdoovWWXg) 2025-06-02 00:22:28.787684 | orchestrator | 2025-06-02 00:22:28.788366 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:28.789213 | orchestrator | Monday 02 June 2025 00:22:28 +0000 (0:00:01.019) 0:00:23.806 *********** 2025-06-02 00:22:29.786236 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIJzr9C8pvY9qTOiLUB5+DBxai1JpSLK5X1Q3o0dIGQjD) 2025-06-02 00:22:29.787493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+FheHQQ0U5FRY9vr0MUu6dK7yJQacGqyTFqcajVsD728Swl64iqXdtB9M+wcHsyRa6LMCDA4HQx65sp3F8GK55fZG4+Sn2pKMqAOil8ROlkvXsynG+qiqBJECOF5M+whY1cOGuy71nZSE3VBsr2rScySA33bgkoBnflz4amTWYtB6RTnlcvlx3PQ/YSiWo9lVhsSIAL5FG9xA6kqREuaxJgVluaRm3WUi6UHYyL+NPLn9PPfq7HEINRDFYqRx1jvWDMCO49wCsr7skHIkFt02lUCv3E0UPzlG7MHJipyPgxIrFent3lrh2gkUj1YkYUfEOK9nKLw/mb/CY5et84cs1ZhBCXzOy+P8gl+ci+k4W+XG8v58WYDbwtMDvYcfzXLpKJzSdl1hCJ4AnsMlg4SB4F8Z3aRP8KsqTqdDXWDJ9sVP/oN3AHKlYVC3l9pu6o66T/IeKn+bC9AfF3RjqGjrz7Y6DPt3SrfF91i8GK27oJFf15whmSxP+Jk+GKBo4tM=) 2025-06-02 00:22:29.788664 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFu8JSXlwQwCo+n/6fc2vnRgbAMELwxcvfJoXeRDc9nle5WovuArAG+nip9cSFJZx6W57yMDnVSYYc+YssnaXp8=) 2025-06-02 00:22:29.791043 | orchestrator | 2025-06-02 00:22:29.791797 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 00:22:29.792981 | orchestrator | Monday 02 June 2025 00:22:29 +0000 (0:00:01.001) 0:00:24.807 *********** 2025-06-02 00:22:30.816186 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+MzhWRG7jHfcTQkIYGLnnXzpvK98K+kt3iEKA6BTIUqK5zJ+La3KlhmZsh0pvqdbwy6SpZ6xNTGUwjNkiHzInu/KkNePX7bmaZAJ/wR9MY3pOS7ikVAzjSexpAZcH/ArWT5iJiMIl99JKdrsTOszfF0gUxodQ5bnnriVmCLxm7c4IqZR2sGzXTxhk+qnn0Xb3j3ss7MnjZ+6OSzgXV8TXIegEIQkOWMEBU7fyc3n6zo0qCWllyS3eyJBY5iBxmdhNW3N85CAg4/mLC6XIVaRgPLSTaaZ9XBI/l01W0KtXgj2xfuvSJNJ8TTcHRcywwffNcu2Lf681f9cY6Xo9mjKBwLLf7T6SGb+HwENKlL8Rawr1l59nlfHmjdU6gaCU4PZKAdngI6sTQOcg0HdrSzxKc1hD3EQeHekHnm/abwvMcPepjvy/51IDPs6wKACpbcScZzFq2SISaHwp/R2YKzxrsk0lrK58K5JmPO/52t1xG/UddiaKmt2P1+ds8RhbDKc=) 2025-06-02 00:22:30.816289 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOYYU9DfFBiHBtZRq5z8Vi/7PczvDujnEl4F8u6tjvklcEj3tI0FTUbFUhGXGAd/Pi/UQzSSpFkjlurPKwXDSXo=)
2025-06-02 00:22:30.818381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID2pGqFshACRtEt3Mq1X7hMnsB1CA9RsR8JGVHDEvAzl)
2025-06-02 00:22:30.818828 | orchestrator |
2025-06-02 00:22:30.819746 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-06-02 00:22:30.820491 | orchestrator | Monday 02 June 2025 00:22:30 +0000 (0:00:01.029) 0:00:25.837 ***********
2025-06-02 00:22:30.960499 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-02 00:22:30.960685 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-02 00:22:30.961641 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-02 00:22:30.962619 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-02 00:22:30.963375 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-02 00:22:30.964229 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-02 00:22:30.964899 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-02 00:22:30.965564 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:22:30.966145 | orchestrator |
2025-06-02 00:22:30.966830 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-06-02 00:22:30.967560 | orchestrator | Monday 02 June 2025 00:22:30 +0000 (0:00:00.144) 0:00:25.981 ***********
2025-06-02 00:22:31.025313 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:22:31.025999 | orchestrator |
2025-06-02 00:22:31.026721 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-06-02 00:22:31.027471 | orchestrator | Monday 02 June 2025 00:22:31 +0000 (0:00:00.066) 0:00:26.047 ***********
2025-06-02 00:22:31.066682 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:22:31.066947 | orchestrator |
2025-06-02 00:22:31.067614 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-06-02 00:22:31.068398 | orchestrator | Monday 02 June 2025 00:22:31 +0000 (0:00:00.041) 0:00:26.089 ***********
2025-06-02 00:22:31.662369 | orchestrator | changed: [testbed-manager]
2025-06-02 00:22:31.663764 | orchestrator |
2025-06-02 00:22:31.664500 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:22:31.665054 | orchestrator | 2025-06-02 00:22:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:22:31.665308 | orchestrator | 2025-06-02 00:22:31 | INFO  | Please wait and do not abort execution.
2025-06-02 00:22:31.666700 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 00:22:31.667349 | orchestrator |
2025-06-02 00:22:31.667916 | orchestrator |
2025-06-02 00:22:31.668551 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:22:31.669161 | orchestrator | Monday 02 June 2025 00:22:31 +0000 (0:00:00.594) 0:00:26.683 ***********
2025-06-02 00:22:31.669697 | orchestrator | ===============================================================================
2025-06-02 00:22:31.670319 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.83s
2025-06-02 00:22:31.671258 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.14s
2025-06-02 00:22:31.671503 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s
2025-06-02 00:22:31.672094 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-06-02 00:22:31.672448 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-06-02 00:22:31.672869 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-06-02 00:22:31.673781 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-02 00:22:31.674974 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-02 00:22:31.675262 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-02 00:22:31.675815 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-02 00:22:31.675973 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-06-02 00:22:31.676423 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-06-02 00:22:31.676875 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-06-02 00:22:31.677294 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-06-02 00:22:31.677675 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-06-02 00:22:31.678125 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s
2025-06-02 00:22:31.678410 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.59s
2025-06-02 00:22:31.678830 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s
2025-06-02 00:22:31.679714 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2025-06-02 00:22:31.680410 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.14s
2025-06-02 00:22:32.101841 | orchestrator | + osism apply squid
2025-06-02 00:22:33.708161 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:22:33.708266 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:22:33.708283 | orchestrator | Registering Redlock._release_script
2025-06-02 00:22:33.769849 | orchestrator | 2025-06-02 00:22:33 | INFO  | Task 6c3300cf-6e8b-4981-81fc-cea3853e87af (squid) was prepared for execution.
2025-06-02 00:22:33.769944 | orchestrator | 2025-06-02 00:22:33 | INFO  | It takes a moment until task 6c3300cf-6e8b-4981-81fc-cea3853e87af (squid) has been started and output is visible here.
2025-06-02 00:22:37.582947 | orchestrator |
2025-06-02 00:22:37.583194 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-06-02 00:22:37.583894 | orchestrator |
2025-06-02 00:22:37.584656 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-06-02 00:22:37.586102 | orchestrator | Monday 02 June 2025 00:22:37 +0000 (0:00:00.159) 0:00:00.159 ***********
2025-06-02 00:22:37.661880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 00:22:37.662002 | orchestrator |
2025-06-02 00:22:37.662222 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-06-02 00:22:37.662883 | orchestrator | Monday 02 June 2025 00:22:37 +0000 (0:00:00.082) 0:00:00.242 ***********
2025-06-02 00:22:38.940412 | orchestrator | ok: [testbed-manager]
2025-06-02 00:22:38.940765 | orchestrator |
2025-06-02 00:22:38.941487 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-06-02 00:22:38.942233 | orchestrator | Monday 02 June 2025 00:22:38 +0000 (0:00:01.276) 0:00:01.518 ***********
2025-06-02 00:22:40.060031 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-06-02 00:22:40.060135 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-06-02 00:22:40.060150 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-06-02 00:22:40.060437 | orchestrator |
2025-06-02 00:22:40.060936 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-06-02 00:22:40.061470 | orchestrator | Monday 02 June 2025 00:22:40 +0000 (0:00:01.117) 0:00:02.636 ***********
2025-06-02 00:22:41.047266 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-06-02 00:22:41.048809 | orchestrator |
2025-06-02 00:22:41.049028 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-06-02 00:22:41.049740 | orchestrator | Monday 02 June 2025 00:22:41 +0000 (0:00:00.990) 0:00:03.626 ***********
2025-06-02 00:22:41.402316 | orchestrator | ok: [testbed-manager]
2025-06-02 00:22:41.402419 | orchestrator |
2025-06-02 00:22:41.402585 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-06-02 00:22:41.403043 | orchestrator | Monday 02 June 2025 00:22:41 +0000 (0:00:00.354) 0:00:03.981 ***********
2025-06-02 00:22:42.224607 | orchestrator | changed: [testbed-manager]
2025-06-02 00:22:42.224710 | orchestrator |
2025-06-02 00:22:42.226194 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-06-02 00:22:42.226301 | orchestrator | Monday 02 June 2025 00:22:42 +0000 (0:00:00.821) 0:00:04.803 ***********
2025-06-02 00:23:13.067688 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-06-02 00:23:13.067812 | orchestrator | ok: [testbed-manager]
2025-06-02 00:23:13.067829 | orchestrator |
2025-06-02 00:23:13.067841 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-06-02 00:23:13.067854 | orchestrator | Monday 02 June 2025 00:23:13 +0000 (0:00:30.839) 0:00:35.643 ***********
2025-06-02 00:23:25.497159 | orchestrator | changed: [testbed-manager]
2025-06-02 00:23:25.497279 | orchestrator |
2025-06-02 00:23:25.497302 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-06-02 00:23:25.497867 | orchestrator | Monday 02 June 2025 00:23:25 +0000 (0:00:12.431) 0:00:48.074 ***********
2025-06-02 00:24:25.570702 | orchestrator | Pausing for 60 seconds
2025-06-02 00:24:25.570852 | orchestrator | changed: [testbed-manager]
2025-06-02 00:24:25.570955 | orchestrator |
2025-06-02 00:24:25.570973 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-06-02 00:24:25.571293 | orchestrator | Monday 02 June 2025 00:24:25 +0000 (0:01:00.071) 0:01:48.145 ***********
2025-06-02 00:24:25.632857 | orchestrator | ok: [testbed-manager]
2025-06-02 00:24:25.634622 | orchestrator |
2025-06-02 00:24:25.637507 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-06-02 00:24:25.638526 | orchestrator | Monday 02 June 2025 00:24:25 +0000 (0:00:00.067) 0:01:48.213 ***********
2025-06-02 00:24:26.223813 | orchestrator | changed: [testbed-manager]
2025-06-02 00:24:26.223939 | orchestrator |
2025-06-02 00:24:26.225898 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:24:26.226078 | orchestrator | 2025-06-02 00:24:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:24:26.226100 | orchestrator | 2025-06-02 00:24:26 | INFO  | Please wait and do not abort execution.
2025-06-02 00:24:26.228098 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:24:26.228198 | orchestrator |
2025-06-02 00:24:26.228280 | orchestrator |
2025-06-02 00:24:26.229124 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:24:26.230169 | orchestrator | Monday 02 June 2025 00:24:26 +0000 (0:00:00.590) 0:01:48.803 ***********
2025-06-02 00:24:26.231363 | orchestrator | ===============================================================================
2025-06-02 00:24:26.232991 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s
2025-06-02 00:24:26.234119 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.84s
2025-06-02 00:24:26.234804 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.43s
2025-06-02 00:24:26.235547 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.28s
2025-06-02 00:24:26.236493 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.12s
2025-06-02 00:24:26.237233 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.99s
2025-06-02 00:24:26.237851 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.82s
2025-06-02 00:24:26.239278 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s
2025-06-02 00:24:26.239350 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2025-06-02 00:24:26.239787 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2025-06-02 00:24:26.241280 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-06-02 00:24:26.707712 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-02 00:24:26.707806 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-06-02 00:24:26.712291 | orchestrator | ++ semver 9.1.0 9.0.0
2025-06-02 00:24:26.774470 | orchestrator | + [[ 1 -lt 0 ]]
2025-06-02 00:24:26.775415 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-06-02 00:24:28.420860 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:24:28.420960 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:24:28.420977 | orchestrator | Registering Redlock._release_script
2025-06-02 00:24:28.480217 | orchestrator | 2025-06-02 00:24:28 | INFO  | Task 2c63fe05-98a2-4611-94c4-d92e0155f069 (operator) was prepared for execution.
2025-06-02 00:24:28.480294 | orchestrator | 2025-06-02 00:24:28 | INFO  | It takes a moment until task 2c63fe05-98a2-4611-94c4-d92e0155f069 (operator) has been started and output is visible here.
2025-06-02 00:24:32.322383 | orchestrator |
2025-06-02 00:24:32.322499 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-06-02 00:24:32.324076 | orchestrator |
2025-06-02 00:24:32.324151 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 00:24:32.325895 | orchestrator | Monday 02 June 2025 00:24:32 +0000 (0:00:00.145) 0:00:00.145 ***********
2025-06-02 00:24:35.511464 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:24:35.511553 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:24:35.511567 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:24:35.511631 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:24:35.511644 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:24:35.511654 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:24:35.511666 | orchestrator |
2025-06-02 00:24:35.511678 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-06-02 00:24:35.511689 | orchestrator | Monday 02 June 2025 00:24:35 +0000 (0:00:03.189) 0:00:03.334 ***********
2025-06-02 00:24:36.221864 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:24:36.222686 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:24:36.222732 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:24:36.223643 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:24:36.223736 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:24:36.224265 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:24:36.224862 | orchestrator |
2025-06-02 00:24:36.225641 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-06-02 00:24:36.226045 | orchestrator |
2025-06-02 00:24:36.227640 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-02 00:24:36.228061 | orchestrator | Monday 02 June 2025 00:24:36 +0000 (0:00:00.712) 0:00:04.047 ***********
2025-06-02 00:24:36.274491 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:24:36.299075 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:24:36.315377 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:24:36.354509 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:24:36.355007 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:24:36.355930 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:24:36.356035 | orchestrator |
2025-06-02 00:24:36.356214 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-02 00:24:36.356608 | orchestrator | Monday 02 June 2025 00:24:36 +0000 (0:00:00.133) 0:00:04.181 ***********
2025-06-02 00:24:36.403441 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:24:36.430288 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:24:36.451064 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:24:36.490082 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:24:36.490705 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:24:36.493390 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:24:36.495783 | orchestrator |
2025-06-02 00:24:36.495822 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-02 00:24:36.496667 | orchestrator | Monday 02 June 2025 00:24:36 +0000 (0:00:00.133) 0:00:04.314 ***********
2025-06-02 00:24:37.052267 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:24:37.052428 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:24:37.052815 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:24:37.053369 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:24:37.053884 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:24:37.054441 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:24:37.054908 | orchestrator |
2025-06-02 00:24:37.055340 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-02 00:24:37.055836 | orchestrator | Monday 02 June 2025 00:24:37 +0000 (0:00:00.560) 0:00:04.875 ***********
2025-06-02 00:24:37.839678 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:24:37.839823 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:24:37.840380 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:24:37.840850 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:24:37.841311 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:24:37.841772 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:24:37.842199 | orchestrator |
2025-06-02 00:24:37.843699 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-02 00:24:37.844175 | orchestrator | Monday 02 June 2025 00:24:37 +0000 (0:00:00.788) 0:00:05.664 ***********
2025-06-02 00:24:39.022365 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-02 00:24:39.025876 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-02 00:24:39.026132 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-02 00:24:39.026754 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-02 00:24:39.027693 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-02 00:24:39.031022 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-02 00:24:39.031489 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-02 00:24:39.035224 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-02 00:24:39.035761 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-02 00:24:39.038620 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-02 00:24:39.038875 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-02 00:24:39.039779 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-02 00:24:39.040026 | orchestrator |
2025-06-02 00:24:39.040786 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-02 00:24:39.041089 | orchestrator | Monday 02
June 2025 00:24:39 +0000 (0:00:01.183) 0:00:06.847 *********** 2025-06-02 00:24:40.184793 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:24:40.184884 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:24:40.185212 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:24:40.186214 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:24:40.187248 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:24:40.187768 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:24:40.188816 | orchestrator | 2025-06-02 00:24:40.189376 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-02 00:24:40.190311 | orchestrator | Monday 02 June 2025 00:24:40 +0000 (0:00:01.161) 0:00:08.009 *********** 2025-06-02 00:24:41.339921 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-06-02 00:24:41.340027 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-06-02 00:24:41.341782 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-06-02 00:24:41.465429 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 00:24:41.467490 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 00:24:41.467887 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 00:24:41.468645 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 00:24:41.469920 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 00:24:41.470923 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 00:24:41.471700 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-06-02 00:24:41.474826 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-06-02 00:24:41.475123 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-06-02 00:24:41.476144 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-06-02 00:24:41.477041 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-06-02 00:24:41.478631 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-06-02 00:24:41.478693 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-06-02 00:24:41.480714 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-06-02 00:24:41.480780 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-06-02 00:24:41.480901 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-06-02 00:24:41.481515 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-06-02 00:24:41.482281 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-06-02 00:24:41.483046 | 
orchestrator | 2025-06-02 00:24:41.483719 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-02 00:24:41.484122 | orchestrator | Monday 02 June 2025 00:24:41 +0000 (0:00:01.277) 0:00:09.287 *********** 2025-06-02 00:24:42.108429 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:24:42.108641 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:24:42.109010 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:24:42.109573 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:24:42.110496 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:24:42.110850 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:24:42.111093 | orchestrator | 2025-06-02 00:24:42.111396 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-02 00:24:42.111813 | orchestrator | Monday 02 June 2025 00:24:42 +0000 (0:00:00.646) 0:00:09.934 *********** 2025-06-02 00:24:42.172771 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:24:42.195002 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:24:42.216735 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:24:42.265972 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:24:42.266113 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:24:42.266400 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:24:42.266526 | orchestrator | 2025-06-02 00:24:42.266978 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-02 00:24:42.267263 | orchestrator | Monday 02 June 2025 00:24:42 +0000 (0:00:00.158) 0:00:10.092 *********** 2025-06-02 00:24:43.020783 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 00:24:43.021614 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:24:43.022374 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-02 00:24:43.023454 | orchestrator | changed: [testbed-node-2] 2025-06-02 
00:24:43.024256 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 00:24:43.025303 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:24:43.027412 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 00:24:43.028675 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:24:43.030006 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 00:24:43.032381 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:24:43.033608 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-02 00:24:43.034406 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:24:43.035258 | orchestrator | 2025-06-02 00:24:43.036028 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-02 00:24:43.036974 | orchestrator | Monday 02 June 2025 00:24:43 +0000 (0:00:00.752) 0:00:10.845 *********** 2025-06-02 00:24:43.064740 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:24:43.085018 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:24:43.104006 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:24:43.123319 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:24:43.162699 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:24:43.164826 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:24:43.166095 | orchestrator | 2025-06-02 00:24:43.167192 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-02 00:24:43.168307 | orchestrator | Monday 02 June 2025 00:24:43 +0000 (0:00:00.143) 0:00:10.988 *********** 2025-06-02 00:24:43.203867 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:24:43.222835 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:24:43.269401 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:24:43.303852 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:24:43.304848 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:24:43.305253 | 
orchestrator | skipping: [testbed-node-5] 2025-06-02 00:24:43.305870 | orchestrator | 2025-06-02 00:24:43.306663 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-02 00:24:43.306968 | orchestrator | Monday 02 June 2025 00:24:43 +0000 (0:00:00.139) 0:00:11.128 *********** 2025-06-02 00:24:43.350439 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:24:43.369557 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:24:43.388241 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:24:43.406840 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:24:43.433237 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:24:43.433693 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:24:43.437608 | orchestrator | 2025-06-02 00:24:43.437969 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-02 00:24:43.438690 | orchestrator | Monday 02 June 2025 00:24:43 +0000 (0:00:00.131) 0:00:11.259 *********** 2025-06-02 00:24:44.092032 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:24:44.094924 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:24:44.094962 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:24:44.095021 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:24:44.095034 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:24:44.095103 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:24:44.096211 | orchestrator | 2025-06-02 00:24:44.097176 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-02 00:24:44.098059 | orchestrator | Monday 02 June 2025 00:24:44 +0000 (0:00:00.657) 0:00:11.917 *********** 2025-06-02 00:24:44.180194 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:24:44.200097 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:24:44.292185 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:24:44.293886 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 00:24:44.295068 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:24:44.296499 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:24:44.298087 | orchestrator | 2025-06-02 00:24:44.299214 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:24:44.300157 | orchestrator | 2025-06-02 00:24:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 00:24:44.300191 | orchestrator | 2025-06-02 00:24:44 | INFO  | Please wait and do not abort execution. 2025-06-02 00:24:44.301539 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 00:24:44.302724 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 00:24:44.303828 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 00:24:44.304651 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 00:24:44.305908 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 00:24:44.306655 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 00:24:44.307518 | orchestrator | 2025-06-02 00:24:44.308520 | orchestrator | 2025-06-02 00:24:44.309362 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:24:44.310186 | orchestrator | Monday 02 June 2025 00:24:44 +0000 (0:00:00.200) 0:00:12.117 *********** 2025-06-02 00:24:44.310914 | orchestrator | =============================================================================== 2025-06-02 00:24:44.311828 | orchestrator | Gathering Facts --------------------------------------------------------- 
3.19s 2025-06-02 00:24:44.312780 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s 2025-06-02 00:24:44.313270 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2025-06-02 00:24:44.314232 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.16s 2025-06-02 00:24:44.315244 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s 2025-06-02 00:24:44.316840 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s 2025-06-02 00:24:44.317285 | orchestrator | Do not require tty for all users ---------------------------------------- 0.71s 2025-06-02 00:24:44.318170 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s 2025-06-02 00:24:44.319210 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.65s 2025-06-02 00:24:44.320751 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.56s 2025-06-02 00:24:44.320777 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s 2025-06-02 00:24:44.320788 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2025-06-02 00:24:44.321326 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-06-02 00:24:44.322097 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2025-06-02 00:24:44.322287 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.13s 2025-06-02 00:24:44.322743 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s 2025-06-02 00:24:44.323037 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 
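The shell trace at the top of this excerpt gates configuration changes on the deployed version: for a tagged (non-`latest`) release it rewrites `docker_namespace` in `kolla.yml`, then compares `9.1.0` against `9.0.0` with a `semver` helper and branches on the result (`1`, so `[[ 1 -lt 0 ]]` is false and the downgrade path is skipped). The testbed's actual `semver` binary is not shown in the log; the sketch below approximates its apparent contract (print `1`/`0`/`-1` for greater/equal/less) using `sort -V`, and echoes the namespace rewrite instead of editing the real file.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the version gate seen in the trace above.
# The real "semver" helper is not visible in the log; sort -V stands in here.
# Prints 1 when $1 > $2, -1 when $1 < $2, 0 when equal.
semver() {
  if [[ "$1" == "$2" ]]; then
    echo 0
  elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
    echo 1
  else
    echo -1
  fi
}

version="9.1.0"

# Tagged releases pull images from the release namespace instead of kolla.
# (The job does this with: sed -i 's#docker_namespace: kolla#...#' kolla.yml)
if [[ "${version}" != "latest" ]]; then
  echo "would set docker_namespace: kolla/release"
fi

# Extra migration steps only run when the target is older than the baseline.
if [[ "$(semver "${version}" 9.0.0)" -lt 0 ]]; then
  echo "downgrade path"
else
  echo "no downgrade"
fi
```

With `version=9.1.0` the comparison yields `1`, matching the `[[ 1 -lt 0 ]]` check in the log, so only the namespace rewrite takes effect.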
2025-06-02 00:24:44.767830 | orchestrator | + osism apply --environment custom facts 2025-06-02 00:24:46.378938 | orchestrator | 2025-06-02 00:24:46 | INFO  | Trying to run play facts in environment custom 2025-06-02 00:24:46.383790 | orchestrator | Registering Redlock._acquired_script 2025-06-02 00:24:46.383840 | orchestrator | Registering Redlock._extend_script 2025-06-02 00:24:46.383854 | orchestrator | Registering Redlock._release_script 2025-06-02 00:24:46.439134 | orchestrator | 2025-06-02 00:24:46 | INFO  | Task 0b93b411-ee22-4358-8bc4-967aed9449f0 (facts) was prepared for execution. 2025-06-02 00:24:46.439208 | orchestrator | 2025-06-02 00:24:46 | INFO  | It takes a moment until task 0b93b411-ee22-4358-8bc4-967aed9449f0 (facts) has been started and output is visible here. 2025-06-02 00:24:50.202316 | orchestrator | 2025-06-02 00:24:50.203897 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-06-02 00:24:50.205086 | orchestrator | 2025-06-02 00:24:50.206386 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-02 00:24:50.207033 | orchestrator | Monday 02 June 2025 00:24:50 +0000 (0:00:00.086) 0:00:00.086 *********** 2025-06-02 00:24:51.673863 | orchestrator | ok: [testbed-manager] 2025-06-02 00:24:51.674382 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:24:51.674956 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:24:51.675579 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:24:51.677055 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:24:51.677570 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:24:51.678149 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:24:51.678685 | orchestrator | 2025-06-02 00:24:51.679131 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-06-02 00:24:51.679706 | orchestrator | Monday 02 June 2025 00:24:51 +0000 (0:00:01.467) 
0:00:01.553 *********** 2025-06-02 00:24:52.909044 | orchestrator | ok: [testbed-manager] 2025-06-02 00:24:52.910289 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:24:52.911739 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:24:52.913152 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:24:52.914465 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:24:52.917405 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:24:52.918251 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:24:52.919638 | orchestrator | 2025-06-02 00:24:52.920359 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-06-02 00:24:52.921250 | orchestrator | 2025-06-02 00:24:52.922242 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-02 00:24:52.922743 | orchestrator | Monday 02 June 2025 00:24:52 +0000 (0:00:01.237) 0:00:02.791 *********** 2025-06-02 00:24:53.037767 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:24:53.038445 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:24:53.042795 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:24:53.042903 | orchestrator | 2025-06-02 00:24:53.043693 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-02 00:24:53.044447 | orchestrator | Monday 02 June 2025 00:24:53 +0000 (0:00:00.131) 0:00:02.922 *********** 2025-06-02 00:24:53.225378 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:24:53.226299 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:24:53.227105 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:24:53.229494 | orchestrator | 2025-06-02 00:24:53.230123 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-02 00:24:53.231041 | orchestrator | Monday 02 June 2025 00:24:53 +0000 (0:00:00.188) 0:00:03.111 *********** 2025-06-02 00:24:53.402485 | orchestrator | ok: [testbed-node-3] 
2025-06-02 00:24:53.403199 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:24:53.403285 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:24:53.403421 | orchestrator | 2025-06-02 00:24:53.403856 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-02 00:24:53.404230 | orchestrator | Monday 02 June 2025 00:24:53 +0000 (0:00:00.177) 0:00:03.289 *********** 2025-06-02 00:24:53.536257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:24:53.537178 | orchestrator | 2025-06-02 00:24:53.538882 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-02 00:24:53.538909 | orchestrator | Monday 02 June 2025 00:24:53 +0000 (0:00:00.132) 0:00:03.421 *********** 2025-06-02 00:24:53.969364 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:24:53.970118 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:24:53.970780 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:24:53.971724 | orchestrator | 2025-06-02 00:24:53.972192 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-02 00:24:53.973765 | orchestrator | Monday 02 June 2025 00:24:53 +0000 (0:00:00.430) 0:00:03.852 *********** 2025-06-02 00:24:54.070669 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:24:54.072971 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:24:54.073046 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:24:54.073067 | orchestrator | 2025-06-02 00:24:54.073088 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-02 00:24:54.073108 | orchestrator | Monday 02 June 2025 00:24:54 +0000 (0:00:00.105) 0:00:03.957 *********** 2025-06-02 00:24:55.066898 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:24:55.067000 | 
orchestrator | changed: [testbed-node-3] 2025-06-02 00:24:55.068282 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:24:55.068327 | orchestrator | 2025-06-02 00:24:55.068462 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 00:24:55.074756 | orchestrator | Monday 02 June 2025 00:24:55 +0000 (0:00:00.993) 0:00:04.951 *********** 2025-06-02 00:24:55.541707 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:24:55.541843 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:24:55.541932 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:24:55.541948 | orchestrator | 2025-06-02 00:24:55.542098 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 00:24:55.542436 | orchestrator | Monday 02 June 2025 00:24:55 +0000 (0:00:00.475) 0:00:05.426 *********** 2025-06-02 00:24:56.643068 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:24:56.644181 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:24:56.644787 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:24:56.645978 | orchestrator | 2025-06-02 00:24:56.646772 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 00:24:56.648033 | orchestrator | Monday 02 June 2025 00:24:56 +0000 (0:00:01.097) 0:00:06.524 *********** 2025-06-02 00:25:09.916201 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:25:09.916348 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:25:09.916369 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:25:09.916381 | orchestrator | 2025-06-02 00:25:09.916394 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-06-02 00:25:09.916406 | orchestrator | Monday 02 June 2025 00:25:09 +0000 (0:00:13.272) 0:00:19.796 *********** 2025-06-02 00:25:10.023754 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:25:10.023975 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 00:25:10.026064 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:25:10.026587 | orchestrator | 2025-06-02 00:25:10.027499 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-06-02 00:25:10.028154 | orchestrator | Monday 02 June 2025 00:25:10 +0000 (0:00:00.113) 0:00:19.909 *********** 2025-06-02 00:25:17.461430 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:25:17.462217 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:25:17.462308 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:25:17.463335 | orchestrator | 2025-06-02 00:25:17.465818 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-02 00:25:17.466644 | orchestrator | Monday 02 June 2025 00:25:17 +0000 (0:00:07.436) 0:00:27.345 *********** 2025-06-02 00:25:17.869934 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:25:17.872219 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:25:17.872297 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:25:17.872416 | orchestrator | 2025-06-02 00:25:17.873482 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-02 00:25:17.874102 | orchestrator | Monday 02 June 2025 00:25:17 +0000 (0:00:00.410) 0:00:27.756 *********** 2025-06-02 00:25:21.295643 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-06-02 00:25:21.297789 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-06-02 00:25:21.299468 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-06-02 00:25:21.300385 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-06-02 00:25:21.301663 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-06-02 00:25:21.302778 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-06-02 00:25:21.303989 | 
orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-06-02 00:25:21.304713 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-06-02 00:25:21.305365 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-06-02 00:25:21.305780 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-06-02 00:25:21.306285 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-06-02 00:25:21.306806 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-06-02 00:25:21.307917 | orchestrator | 2025-06-02 00:25:21.308510 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-02 00:25:21.308947 | orchestrator | Monday 02 June 2025 00:25:21 +0000 (0:00:03.423) 0:00:31.179 *********** 2025-06-02 00:25:22.526659 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:25:22.530258 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:25:22.530303 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:25:22.530316 | orchestrator | 2025-06-02 00:25:22.530329 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 00:25:22.531082 | orchestrator | 2025-06-02 00:25:22.532112 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-02 00:25:22.532885 | orchestrator | Monday 02 June 2025 00:25:22 +0000 (0:00:01.231) 0:00:32.410 *********** 2025-06-02 00:25:26.430921 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:25:26.431045 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:25:26.432572 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:25:26.432769 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:25:26.433535 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:25:26.434081 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:25:26.434856 | orchestrator | ok: 
[testbed-manager]
2025-06-02 00:25:26.435320 | orchestrator |
2025-06-02 00:25:26.435777 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:25:26.436216 | orchestrator | 2025-06-02 00:25:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:25:26.436355 | orchestrator | 2025-06-02 00:25:26 | INFO  | Please wait and do not abort execution.
2025-06-02 00:25:26.437254 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:25:26.437502 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:25:26.437970 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:25:26.438394 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:25:26.438769 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 00:25:26.440873 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 00:25:26.441525 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 00:25:26.442920 | orchestrator |
2025-06-02 00:25:26.443640 | orchestrator |
2025-06-02 00:25:26.444361 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:25:26.444609 | orchestrator | Monday 02 June 2025 00:25:26 +0000 (0:00:03.905) 0:00:36.316 ***********
2025-06-02 00:25:26.445052 | orchestrator | ===============================================================================
2025-06-02 00:25:26.445492 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.27s
2025-06-02 00:25:26.445908 | orchestrator | Install required packages (Debian) -------------------------------------- 7.44s
2025-06-02 00:25:26.446310 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.91s
2025-06-02 00:25:26.446714 | orchestrator | Copy fact files --------------------------------------------------------- 3.42s
2025-06-02 00:25:26.447130 | orchestrator | Create custom facts directory ------------------------------------------- 1.47s
2025-06-02 00:25:26.447543 | orchestrator | Copy fact file ---------------------------------------------------------- 1.24s
2025-06-02 00:25:26.447943 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.23s
2025-06-02 00:25:26.448373 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2025-06-02 00:25:26.448774 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.99s
2025-06-02 00:25:26.449159 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-06-02 00:25:26.449561 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-06-02 00:25:26.449894 | orchestrator | Create custom facts directory ------------------------------------------- 0.41s
2025-06-02 00:25:26.450305 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2025-06-02 00:25:26.450781 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2025-06-02 00:25:26.451027 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-06-02 00:25:26.451377 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2025-06-02 00:25:26.451925 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-06-02 00:25:26.452097 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-06-02 00:25:26.906007 | orchestrator | + osism apply bootstrap
2025-06-02 00:25:28.529536 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:25:28.529719 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:25:28.529739 | orchestrator | Registering Redlock._release_script
2025-06-02 00:25:28.584485 | orchestrator | 2025-06-02 00:25:28 | INFO  | Task a8eeaf26-5f72-4264-a07a-6a127e104d96 (bootstrap) was prepared for execution.
2025-06-02 00:25:28.584557 | orchestrator | 2025-06-02 00:25:28 | INFO  | It takes a moment until task a8eeaf26-5f72-4264-a07a-6a127e104d96 (bootstrap) has been started and output is visible here.
2025-06-02 00:25:32.241108 | orchestrator |
2025-06-02 00:25:32.241204 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-02 00:25:32.241480 | orchestrator |
2025-06-02 00:25:32.243348 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-02 00:25:32.244330 | orchestrator | Monday 02 June 2025 00:25:32 +0000 (0:00:00.118) 0:00:00.118 ***********
2025-06-02 00:25:32.312359 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:32.326007 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:32.347779 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:32.395279 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:32.395315 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:32.400233 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:32.400486 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:32.400901 | orchestrator |
2025-06-02 00:25:32.400922 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 00:25:32.401422 | orchestrator |
2025-06-02 00:25:32.401566 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 00:25:32.401793 | orchestrator | Monday 02 June 2025 00:25:32 +0000 (0:00:00.158) 0:00:00.277 ***********
2025-06-02 00:25:35.973023 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:35.975213 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:35.975304 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:35.976607 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:35.977334 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:35.978259 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:35.978792 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:35.979338 | orchestrator |
2025-06-02 00:25:35.979948 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-02 00:25:35.980527 | orchestrator |
2025-06-02 00:25:35.981171 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 00:25:35.981919 | orchestrator | Monday 02 June 2025 00:25:35 +0000 (0:00:03.575) 0:00:03.853 ***********
2025-06-02 00:25:36.047349 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-02 00:25:36.075337 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-02 00:25:36.075390 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-02 00:25:36.075579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-02 00:25:36.076066 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-02 00:25:36.076513 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-02 00:25:36.115029 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-02 00:25:36.115440 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-06-02 00:25:36.115803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:25:36.116194 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-06-02 00:25:36.117566 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-06-02 00:25:36.361713 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-06-02 00:25:36.363176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:25:36.365884 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-02 00:25:36.366985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-06-02 00:25:36.368110 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-06-02 00:25:36.369096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 00:25:36.370068 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-02 00:25:36.370614 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:25:36.371526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:25:36.372617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 00:25:36.373167 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-06-02 00:25:36.374086 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 00:25:36.374843 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-06-02 00:25:36.375569 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 00:25:36.376068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 00:25:36.377014 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 00:25:36.378773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 00:25:36.379545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-06-02 00:25:36.379941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 00:25:36.382927 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:25:36.383455 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 00:25:36.384349 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-06-02 00:25:36.384446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 00:25:36.385084 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-06-02 00:25:36.385707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 00:25:36.386107 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:25:36.386832 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-06-02 00:25:36.389902 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 00:25:36.390122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 00:25:36.390747 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:25:36.391186 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-06-02 00:25:36.391790 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-06-02 00:25:36.392912 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 00:25:36.393173 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 00:25:36.393869 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-06-02 00:25:36.394295 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 00:25:36.394590 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:25:36.397983 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 00:25:36.398289 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-06-02 00:25:36.398825 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 00:25:36.399331 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 00:25:36.399701 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:25:36.400213 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 00:25:36.400728 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 00:25:36.402888 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:25:36.403168 | orchestrator |
2025-06-02 00:25:36.403418 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-06-02 00:25:36.403805 | orchestrator |
2025-06-02 00:25:36.404097 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-06-02 00:25:36.404417 | orchestrator | Monday 02 June 2025 00:25:36 +0000 (0:00:00.386) 0:00:04.239 ***********
2025-06-02 00:25:37.515980 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:37.516457 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:37.517720 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:37.518551 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:37.519433 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:37.520123 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:37.521121 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:37.521733 | orchestrator |
2025-06-02 00:25:37.522246 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-06-02 00:25:37.522722 | orchestrator | Monday 02 June 2025 00:25:37 +0000 (0:00:01.156) 0:00:05.396 ***********
2025-06-02 00:25:38.600980 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:38.601548 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:38.602322 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:38.602910 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:38.603273 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:38.603774 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:38.604225 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:38.605443 | orchestrator |
2025-06-02 00:25:38.605465 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-06-02 00:25:38.605734 | orchestrator | Monday 02 June 2025 00:25:38 +0000 (0:00:01.083) 0:00:06.480 ***********
2025-06-02 00:25:38.795841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:25:38.795917 | orchestrator |
2025-06-02 00:25:38.796165 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-06-02 00:25:38.796808 | orchestrator | Monday 02 June 2025 00:25:38 +0000 (0:00:00.196) 0:00:06.676 ***********
2025-06-02 00:25:40.685177 | orchestrator | changed: [testbed-manager]
2025-06-02 00:25:40.685251 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:25:40.685887 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:25:40.686128 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:25:40.686515 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:25:40.687006 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:25:40.688955 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:25:40.689035 | orchestrator |
2025-06-02 00:25:40.689051 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-06-02 00:25:40.690348 | orchestrator | Monday 02 June 2025 00:25:40 +0000 (0:00:01.884) 0:00:08.560 ***********
2025-06-02 00:25:40.756504 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:25:41.001482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:25:41.001757 | orchestrator |
2025-06-02 00:25:41.002573 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-06-02 00:25:41.003133 | orchestrator | Monday 02 June 2025 00:25:40 +0000 (0:00:00.321) 0:00:08.881 ***********
2025-06-02 00:25:41.975560 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:25:41.979357 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:25:41.979468 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:25:41.980313 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:25:41.980944 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:25:41.981897 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:25:41.982327 | orchestrator |
2025-06-02 00:25:41.983156 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-06-02 00:25:41.983548 | orchestrator | Monday 02 June 2025 00:25:41 +0000 (0:00:00.973) 0:00:09.855 ***********
2025-06-02 00:25:42.044725 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:25:42.510796 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:25:42.510925 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:25:42.511023 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:25:42.511147 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:25:42.511613 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:25:42.511919 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:25:42.512720 | orchestrator |
2025-06-02 00:25:42.513038 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-06-02 00:25:42.514342 | orchestrator | Monday 02 June 2025 00:25:42 +0000 (0:00:00.535) 0:00:10.390 ***********
2025-06-02 00:25:42.617418 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:25:42.655178 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:25:42.683992 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:25:42.933725 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:25:42.934988 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:25:42.936438 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:25:42.937991 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:42.939313 | orchestrator |
2025-06-02 00:25:42.940973 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-02 00:25:42.941026 | orchestrator | Monday 02 June 2025 00:25:42 +0000 (0:00:00.420) 0:00:10.811 ***********
2025-06-02 00:25:43.004569 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:25:43.024949 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:25:43.047904 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:25:43.071865 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:25:43.118190 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:25:43.119186 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:25:43.120277 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:25:43.121276 | orchestrator |
2025-06-02 00:25:43.122094 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-02 00:25:43.122421 | orchestrator | Monday 02 June 2025 00:25:43 +0000 (0:00:00.187) 0:00:10.999 ***********
2025-06-02 00:25:43.378898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:25:43.379040 | orchestrator |
2025-06-02 00:25:43.379128 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-02 00:25:43.379870 | orchestrator | Monday 02 June 2025 00:25:43 +0000 (0:00:00.259) 0:00:11.259 ***********
2025-06-02 00:25:43.659502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:25:43.659600 | orchestrator |
2025-06-02 00:25:43.660204 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-02 00:25:43.660944 | orchestrator | Monday 02 June 2025 00:25:43 +0000 (0:00:00.281) 0:00:11.540 ***********
2025-06-02 00:25:45.028301 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:45.029289 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:45.031169 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:45.032972 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:45.033872 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:45.034889 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:45.035384 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:45.036198 | orchestrator |
2025-06-02 00:25:45.037014 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-02 00:25:45.037740 | orchestrator | Monday 02 June 2025 00:25:45 +0000 (0:00:01.365) 0:00:12.906 ***********
2025-06-02 00:25:45.097838 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:25:45.123124 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:25:45.151922 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:25:45.181527 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:25:45.246372 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:25:45.246953 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:25:45.248716 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:25:45.249860 | orchestrator |
2025-06-02 00:25:45.251071 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-02 00:25:45.252055 | orchestrator | Monday 02 June 2025 00:25:45 +0000 (0:00:00.220) 0:00:13.126 ***********
2025-06-02 00:25:45.783535 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:45.784201 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:45.785058 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:45.785875 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:45.786878 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:45.787412 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:45.787909 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:45.788676 | orchestrator |
2025-06-02 00:25:45.789051 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-02 00:25:45.789872 | orchestrator | Monday 02 June 2025 00:25:45 +0000 (0:00:00.536) 0:00:13.662 ***********
2025-06-02 00:25:45.864462 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:25:45.884324 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:25:45.907463 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:25:45.941021 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:25:45.999753 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:25:46.000144 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:25:46.002294 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:25:46.002390 | orchestrator |
2025-06-02 00:25:46.003444 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-02 00:25:46.007047 | orchestrator | Monday 02 June 2025 00:25:45 +0000 (0:00:00.217) 0:00:13.880 ***********
2025-06-02 00:25:46.523071 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:46.523303 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:25:46.524064 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:25:46.524523 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:25:46.525373 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:25:46.525761 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:25:46.526355 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:25:46.527075 | orchestrator |
2025-06-02 00:25:46.527570 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-02 00:25:46.528069 | orchestrator | Monday 02 June 2025 00:25:46 +0000 (0:00:00.521) 0:00:14.402 ***********
2025-06-02 00:25:47.601344 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:47.601460 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:25:47.601538 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:25:47.601949 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:25:47.602582 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:25:47.602829 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:25:47.603302 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:25:47.603776 | orchestrator |
2025-06-02 00:25:47.604230 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-02 00:25:47.604600 | orchestrator | Monday 02 June 2025 00:25:47 +0000 (0:00:01.077) 0:00:15.479 ***********
2025-06-02 00:25:48.730313 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:48.731432 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:48.732784 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:48.733680 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:48.734861 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:48.736006 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:48.736430 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:48.737352 | orchestrator |
2025-06-02 00:25:48.737817 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-02 00:25:48.738424 | orchestrator | Monday 02 June 2025 00:25:48 +0000 (0:00:01.129) 0:00:16.609 ***********
2025-06-02 00:25:49.086445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:25:49.087505 | orchestrator |
2025-06-02 00:25:49.090735 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-02 00:25:49.091251 | orchestrator | Monday 02 June 2025 00:25:49 +0000 (0:00:00.356) 0:00:16.966 ***********
2025-06-02 00:25:49.158413 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:25:50.317272 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:25:50.318190 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:25:50.319177 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:25:50.320090 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:25:50.321431 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:25:50.322938 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:25:50.323788 | orchestrator |
2025-06-02 00:25:50.324712 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 00:25:50.325055 | orchestrator | Monday 02 June 2025 00:25:50 +0000 (0:00:01.228) 0:00:18.195 ***********
2025-06-02 00:25:50.383587 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:50.411259 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:50.442393 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:50.462205 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:50.535937 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:50.536703 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:50.537697 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:50.538171 | orchestrator |
2025-06-02 00:25:50.538773 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 00:25:50.539377 | orchestrator | Monday 02 June 2025 00:25:50 +0000 (0:00:00.219) 0:00:18.415 ***********
2025-06-02 00:25:50.607360 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:50.627549 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:50.674111 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:50.742125 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:50.743548 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:50.746141 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:50.746172 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:50.746184 | orchestrator |
2025-06-02 00:25:50.746882 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 00:25:50.747846 | orchestrator | Monday 02 June 2025 00:25:50 +0000 (0:00:00.206) 0:00:18.621 ***********
2025-06-02 00:25:50.816082 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:50.837626 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:50.860999 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:50.886850 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:50.937466 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:50.940856 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:50.940909 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:50.941201 | orchestrator |
2025-06-02 00:25:50.942224 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 00:25:50.943387 | orchestrator | Monday 02 June 2025 00:25:50 +0000 (0:00:00.195) 0:00:18.817 ***********
2025-06-02 00:25:51.209287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:25:51.212719 | orchestrator |
2025-06-02 00:25:51.212783 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 00:25:51.212798 | orchestrator | Monday 02 June 2025 00:25:51 +0000 (0:00:00.269) 0:00:19.086 ***********
2025-06-02 00:25:51.734792 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:51.735249 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:51.736323 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:51.737472 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:51.738561 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:51.739341 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:51.739920 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:51.740989 | orchestrator |
2025-06-02 00:25:51.741724 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 00:25:51.742154 | orchestrator | Monday 02 June 2025 00:25:51 +0000 (0:00:00.527) 0:00:19.614 ***********
2025-06-02 00:25:51.824590 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:25:51.847392 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:25:51.868008 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:25:51.889499 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:25:51.948300 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:25:51.950306 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:25:51.951391 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:25:51.953088 | orchestrator |
2025-06-02 00:25:51.954297 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 00:25:51.954981 | orchestrator | Monday 02 June 2025 00:25:51 +0000 (0:00:00.214) 0:00:19.828 ***********
2025-06-02 00:25:52.935421 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:52.936862 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:52.937254 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:52.938543 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:25:52.939400 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:52.940205 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:25:52.941130 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:25:52.941926 | orchestrator |
2025-06-02 00:25:52.943107 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 00:25:52.944012 | orchestrator | Monday 02 June 2025 00:25:52 +0000 (0:00:00.985) 0:00:20.813 ***********
2025-06-02 00:25:53.482972 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:53.483079 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:53.483095 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:53.483890 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:53.483913 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:25:53.483925 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:25:53.484054 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:25:53.484444 | orchestrator |
2025-06-02 00:25:53.484858 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 00:25:53.485302 | orchestrator | Monday 02 June 2025 00:25:53 +0000 (0:00:00.547) 0:00:21.361 ***********
2025-06-02 00:25:54.586805 | orchestrator | ok: [testbed-manager]
2025-06-02 00:25:54.589675 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:25:54.590507 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:25:54.591531 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:25:54.592139 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:25:54.593406 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:25:54.594950 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:25:54.595495 | orchestrator |
2025-06-02 00:25:54.596398 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 00:25:54.596795 | orchestrator | Monday 02 June 2025 00:25:54 +0000 (0:00:01.103) 0:00:22.465 ***********
2025-06-02 00:26:08.098327 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:26:08.098483 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:26:08.099791 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:26:08.099825 | orchestrator | changed: [testbed-manager]
2025-06-02 00:26:08.099860 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:26:08.101210 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:26:08.102081 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:26:08.102991 | orchestrator |
2025-06-02 00:26:08.103644 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-06-02 00:26:08.104320 | orchestrator | Monday 02 June 2025 00:26:08 +0000 (0:00:13.509) 0:00:35.974 ***********
2025-06-02 00:26:08.170502 | orchestrator | ok: [testbed-manager]
2025-06-02 00:26:08.195789 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:26:08.223623 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:26:08.251385 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:26:08.295951 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:26:08.296181 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:26:08.296202 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:26:08.296627 | orchestrator |
2025-06-02 00:26:08.296899 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-06-02 00:26:08.297145 | orchestrator | Monday 02 June 2025 00:26:08 +0000 (0:00:00.202) 0:00:36.176 ***********
2025-06-02 00:26:08.386303 | orchestrator | ok: [testbed-manager]
2025-06-02 00:26:08.416082 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:26:08.436617 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:26:08.461909 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:26:08.517461 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:26:08.519680 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:26:08.519710 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:26:08.522066 | orchestrator |
2025-06-02 00:26:08.525950 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-06-02 00:26:08.526966 | orchestrator | Monday 02 June 2025 00:26:08 +0000 (0:00:00.220) 0:00:36.397 ***********
2025-06-02 00:26:08.588198 | orchestrator | ok: [testbed-manager]
2025-06-02 00:26:08.611828 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:26:08.636159 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:26:08.661108 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:26:08.714842 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:26:08.716193 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:26:08.716955 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:26:08.717797 | orchestrator |
2025-06-02 00:26:08.718612 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-06-02 00:26:08.719322 | orchestrator | Monday 02 June 2025 00:26:08 +0000 (0:00:00.197) 0:00:36.595 ***********
2025-06-02 00:26:08.975647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:26:08.976512 | orchestrator |
2025-06-02 00:26:08.978310 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-06-02 00:26:08.978346 | orchestrator | Monday 02 June 2025 00:26:08 +0000 (0:00:00.260) 0:00:36.856 ***********
2025-06-02 00:26:10.533825 | orchestrator | ok: [testbed-manager]
2025-06-02 00:26:10.535323 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:26:10.542231 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:26:10.542270 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:26:10.542840 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:26:10.543643 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:26:10.544184 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:26:10.545189 | orchestrator |
2025-06-02 00:26:10.545876 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-06-02 00:26:10.546495 | orchestrator | Monday 02 June 2025 00:26:10 +0000 (0:00:01.550) 0:00:38.407 ***********
2025-06-02 00:26:11.614660 | orchestrator | changed: [testbed-manager]
2025-06-02 00:26:11.615590 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:26:11.616604 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:26:11.617941 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:26:11.618910 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:26:11.619643 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:26:11.620610 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:26:11.620946 | orchestrator |
2025-06-02 00:26:11.621600 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-06-02 00:26:11.622313 | orchestrator | Monday 02 June 2025 00:26:11 +0000 (0:00:01.086) 0:00:39.493 ***********
2025-06-02 00:26:12.387549 | orchestrator | ok: [testbed-manager]
2025-06-02 00:26:12.388521 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:26:12.389508 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:26:12.390740 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:26:12.391810 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:26:12.392811 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:26:12.393470 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:26:12.394175 | orchestrator |
2025-06-02 00:26:12.394920 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-06-02 00:26:12.395391 | orchestrator | Monday 02 June 2025 00:26:12 +0000 (0:00:00.773) 0:00:40.266 ***********
2025-06-02 00:26:12.703931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:26:12.704099 | orchestrator |
2025-06-02 00:26:12.705109 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-06-02 00:26:12.707977 | orchestrator | Monday 02 June 2025 00:26:12 +0000 (0:00:00.315) 0:00:40.582 ***********
2025-06-02 00:26:13.709815 | orchestrator | changed: [testbed-manager]
2025-06-02 00:26:13.710229 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:26:13.710916 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:26:13.711839 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:26:13.712626 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:26:13.713333 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:26:13.714214 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:26:13.715233 | orchestrator |
2025-06-02 00:26:13.715987 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-06-02 00:26:13.716839 | orchestrator | Monday 02 June 2025 00:26:13 +0000 (0:00:01.005) 0:00:41.587 ***********
2025-06-02 00:26:13.790955 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:26:13.824959 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:26:13.855564 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:26:13.887615 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:26:14.018741 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:26:14.019085 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:26:14.019195 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:26:14.020244 | orchestrator |
2025-06-02 00:26:14.020741 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-06-02 00:26:14.021130 | orchestrator | Monday 02 June 2025 00:26:14 +0000 (0:00:00.311) 0:00:41.899 ***********
2025-06-02 00:26:25.645628 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:26:25.645827 | orchestrator | changed: [testbed-node-2]
2025-06-02
00:26:25.645845 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:26:25.645857 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:26:25.645943 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:26:25.647195 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:26:25.648428 | orchestrator | changed: [testbed-manager] 2025-06-02 00:26:25.649734 | orchestrator | 2025-06-02 00:26:25.650870 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-02 00:26:25.651833 | orchestrator | Monday 02 June 2025 00:26:25 +0000 (0:00:11.620) 0:00:53.519 *********** 2025-06-02 00:26:26.668499 | orchestrator | ok: [testbed-manager] 2025-06-02 00:26:26.669139 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:26:26.670298 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:26:26.670730 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:26:26.672164 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:26:26.672817 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:26:26.673523 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:26:26.674354 | orchestrator | 2025-06-02 00:26:26.674871 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-02 00:26:26.675571 | orchestrator | Monday 02 June 2025 00:26:26 +0000 (0:00:01.027) 0:00:54.547 *********** 2025-06-02 00:26:27.542378 | orchestrator | ok: [testbed-manager] 2025-06-02 00:26:27.544389 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:26:27.547649 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:26:27.548827 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:26:27.549856 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:26:27.550981 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:26:27.552100 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:26:27.552957 | orchestrator | 2025-06-02 00:26:27.555816 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-06-02 00:26:27.556329 | orchestrator | Monday 02 June 2025 00:26:27 +0000 (0:00:00.874) 0:00:55.422 *********** 2025-06-02 00:26:27.631489 | orchestrator | ok: [testbed-manager] 2025-06-02 00:26:27.654178 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:26:27.692875 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:26:27.716031 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:26:27.772083 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:26:27.772851 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:26:27.773397 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:26:27.774234 | orchestrator | 2025-06-02 00:26:27.775565 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-02 00:26:27.776355 | orchestrator | Monday 02 June 2025 00:26:27 +0000 (0:00:00.230) 0:00:55.652 *********** 2025-06-02 00:26:27.849028 | orchestrator | ok: [testbed-manager] 2025-06-02 00:26:27.872524 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:26:27.900999 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:26:27.923616 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:26:27.980186 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:26:27.981078 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:26:27.985036 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:26:27.985067 | orchestrator | 2025-06-02 00:26:27.985081 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-02 00:26:27.985095 | orchestrator | Monday 02 June 2025 00:26:27 +0000 (0:00:00.208) 0:00:55.860 *********** 2025-06-02 00:26:28.273637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:26:28.274951 | orchestrator | 2025-06-02 00:26:28.276373 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-06-02 00:26:28.277249 | orchestrator | Monday 02 June 2025 00:26:28 +0000 (0:00:00.292) 0:00:56.153 *********** 2025-06-02 00:26:29.758915 | orchestrator | ok: [testbed-manager] 2025-06-02 00:26:29.759367 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:26:29.760641 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:26:29.761858 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:26:29.763620 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:26:29.764124 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:26:29.765126 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:26:29.765949 | orchestrator | 2025-06-02 00:26:29.766514 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-02 00:26:29.767169 | orchestrator | Monday 02 June 2025 00:26:29 +0000 (0:00:01.483) 0:00:57.636 *********** 2025-06-02 00:26:30.317275 | orchestrator | changed: [testbed-manager] 2025-06-02 00:26:30.318357 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:26:30.318679 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:26:30.319778 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:26:30.320808 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:26:30.321944 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:26:30.322784 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:26:30.324116 | orchestrator | 2025-06-02 00:26:30.324619 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-02 00:26:30.325250 | orchestrator | Monday 02 June 2025 00:26:30 +0000 (0:00:00.559) 0:00:58.196 *********** 2025-06-02 00:26:30.390992 | orchestrator | ok: [testbed-manager] 2025-06-02 00:26:30.419300 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:26:30.438455 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:26:30.470463 | orchestrator | ok: [testbed-node-5] 2025-06-02 
00:26:30.534138 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:26:30.534642 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:26:30.535278 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:26:30.535680 | orchestrator | 2025-06-02 00:26:30.536394 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-02 00:26:30.537315 | orchestrator | Monday 02 June 2025 00:26:30 +0000 (0:00:00.217) 0:00:58.414 *********** 2025-06-02 00:26:31.616191 | orchestrator | ok: [testbed-manager] 2025-06-02 00:26:31.616854 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:26:31.620146 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:26:31.621316 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:26:31.621765 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:26:31.622253 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:26:31.622845 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:26:31.623170 | orchestrator | 2025-06-02 00:26:31.623894 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-02 00:26:31.624453 | orchestrator | Monday 02 June 2025 00:26:31 +0000 (0:00:01.079) 0:00:59.494 *********** 2025-06-02 00:26:33.096974 | orchestrator | changed: [testbed-manager] 2025-06-02 00:26:33.097153 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:26:33.098200 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:26:33.100347 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:26:33.101445 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:26:33.102344 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:26:33.103158 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:26:33.103941 | orchestrator | 2025-06-02 00:26:33.104473 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-02 00:26:33.105239 | orchestrator | Monday 02 June 2025 00:26:33 +0000 (0:00:01.481) 0:01:00.975 *********** 2025-06-02 
00:26:35.109367 | orchestrator | ok: [testbed-manager] 2025-06-02 00:26:35.109528 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:26:35.110773 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:26:35.111414 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:26:35.112781 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:26:35.114126 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:26:35.115104 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:26:35.115988 | orchestrator | 2025-06-02 00:26:35.116812 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-02 00:26:35.117585 | orchestrator | Monday 02 June 2025 00:26:35 +0000 (0:00:02.009) 0:01:02.985 *********** 2025-06-02 00:27:14.291635 | orchestrator | ok: [testbed-manager] 2025-06-02 00:27:14.291820 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:27:14.291841 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:27:14.291853 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:27:14.291946 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:27:14.291963 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:27:14.291975 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:27:14.293417 | orchestrator | 2025-06-02 00:27:14.293881 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-02 00:27:14.295432 | orchestrator | Monday 02 June 2025 00:27:14 +0000 (0:00:39.181) 0:01:42.166 *********** 2025-06-02 00:28:27.339977 | orchestrator | changed: [testbed-manager] 2025-06-02 00:28:27.340098 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:28:27.340115 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:28:27.340188 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:28:27.340985 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:28:27.341520 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:28:27.342680 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:28:27.343192 | 
orchestrator | 2025-06-02 00:28:27.344335 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-02 00:28:27.344867 | orchestrator | Monday 02 June 2025 00:28:27 +0000 (0:01:13.046) 0:02:55.213 *********** 2025-06-02 00:28:28.983130 | orchestrator | ok: [testbed-manager] 2025-06-02 00:28:28.987500 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:28:28.987565 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:28:28.987578 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:28:28.987592 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:28:28.988237 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:28:28.988871 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:28:28.989332 | orchestrator | 2025-06-02 00:28:28.989810 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-02 00:28:28.990318 | orchestrator | Monday 02 June 2025 00:28:28 +0000 (0:00:01.650) 0:02:56.863 *********** 2025-06-02 00:28:39.679713 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:28:39.679905 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:28:39.679934 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:28:39.679946 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:28:39.680027 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:28:39.681635 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:28:39.682077 | orchestrator | changed: [testbed-manager] 2025-06-02 00:28:39.682743 | orchestrator | 2025-06-02 00:28:39.685072 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-02 00:28:39.685101 | orchestrator | Monday 02 June 2025 00:28:39 +0000 (0:00:10.692) 0:03:07.555 *********** 2025-06-02 00:28:40.018409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-02 00:28:40.019073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-02 00:28:40.019995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-02 00:28:40.020599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-02 00:28:40.021355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 
1024}]}) 2025-06-02 00:28:40.023685 | orchestrator | 2025-06-02 00:28:40.023714 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-02 00:28:40.023728 | orchestrator | Monday 02 June 2025 00:28:40 +0000 (0:00:00.342) 0:03:07.898 *********** 2025-06-02 00:28:40.050942 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 00:28:40.079702 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 00:28:40.115886 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:28:40.116502 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 00:28:40.138760 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:28:40.169044 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 00:28:40.169779 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:28:40.196913 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:28:40.694906 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 00:28:40.695265 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 00:28:40.696490 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 00:28:40.696919 | orchestrator | 2025-06-02 00:28:40.697352 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-02 00:28:40.698330 | orchestrator | Monday 02 June 2025 00:28:40 +0000 (0:00:00.675) 0:03:08.573 *********** 2025-06-02 00:28:40.761566 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 00:28:40.761670 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 00:28:40.796487 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 00:28:40.796587 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 00:28:40.796702 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 00:28:40.797098 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 00:28:40.797419 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 00:28:40.797921 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 00:28:40.798254 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 00:28:40.798749 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 00:28:40.798957 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 00:28:40.799274 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 00:28:40.799644 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 00:28:40.799929 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 00:28:40.800401 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 00:28:40.800684 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 00:28:40.801000 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 00:28:40.802933 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 00:28:40.802964 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 00:28:40.805479 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 00:28:40.807685 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 00:28:40.839185 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 00:28:40.839891 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:28:40.840628 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 00:28:40.840981 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 00:28:40.841970 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 00:28:40.844588 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 00:28:40.844635 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 00:28:40.845025 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 00:28:40.845674 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 00:28:40.847273 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 00:28:40.884591 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:28:40.884846 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 00:28:40.885180 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 00:28:40.885648 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 00:28:40.885743 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 00:28:40.885953 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 00:28:40.886274 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 00:28:40.886415 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 00:28:40.886695 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 00:28:40.887033 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 00:28:40.887307 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 00:28:40.907705 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:28:46.447584 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:28:46.448069 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 00:28:46.450000 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 00:28:46.451113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 00:28:46.451978 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-02 00:28:46.453368 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-02 00:28:46.454101 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-02 00:28:46.454907 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-02 00:28:46.456501 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-02 00:28:46.457630 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-02 00:28:46.458740 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-02 00:28:46.459365 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-02 00:28:46.459758 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-02 00:28:46.460420 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-02 00:28:46.461044 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-02 00:28:46.461548 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-02 00:28:46.462373 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-02 00:28:46.462519 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-02 00:28:46.463107 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-02 00:28:46.464187 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-02 00:28:46.464372 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 
2025-06-02 00:28:46.465255 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-02 00:28:46.466233 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-02 00:28:46.466879 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-02 00:28:46.468063 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-02 00:28:46.468387 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-02 00:28:46.470836 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-02 00:28:46.471223 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-02 00:28:46.471463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-02 00:28:46.473789 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-02 00:28:46.473867 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-02 00:28:46.473880 | orchestrator | 2025-06-02 00:28:46.473892 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-02 00:28:46.473903 | orchestrator | Monday 02 June 2025 00:28:46 +0000 (0:00:05.750) 0:03:14.323 *********** 2025-06-02 00:28:47.996902 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 00:28:47.998907 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 00:28:47.998964 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 00:28:47.999475 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 00:28:48.000653 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 00:28:48.001280 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 00:28:48.002227 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 00:28:48.003183 | orchestrator | 2025-06-02 00:28:48.003466 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-02 00:28:48.004599 | orchestrator | Monday 02 June 2025 00:28:47 +0000 (0:00:01.551) 0:03:15.875 *********** 2025-06-02 00:28:48.051705 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 00:28:48.074678 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:28:48.151494 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 00:28:48.152354 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 00:28:48.486513 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:28:48.489503 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:28:48.490522 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 00:28:48.491527 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:28:48.492258 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-02 00:28:48.493029 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-02 00:28:48.493704 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-02 
00:28:48.494371 | orchestrator |
2025-06-02 00:28:48.495258 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-06-02 00:28:48.495929 | orchestrator | Monday 02 June 2025  00:28:48 +0000 (0:00:00.490)       0:03:16.366 ***********
2025-06-02 00:28:48.539773 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 00:28:48.567553 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:28:48.637096 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 00:28:48.637204 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 00:28:49.017310 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:28:49.017665 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:28:49.018661 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 00:28:49.020066 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:28:49.020404 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 00:28:49.021305 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 00:28:49.021928 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 00:28:49.023021 | orchestrator |
2025-06-02 00:28:49.023269 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-06-02 00:28:49.024138 | orchestrator | Monday 02 June 2025  00:28:49 +0000 (0:00:00.530)       0:03:16.896 ***********
2025-06-02 00:28:49.098296 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:28:49.117593 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:28:49.142945 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:28:49.161773 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:28:49.274081 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:28:49.275111 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:28:49.276614 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:28:49.277237 | orchestrator |
2025-06-02 00:28:49.278381 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-06-02 00:28:49.279675 | orchestrator | Monday 02 June 2025  00:28:49 +0000 (0:00:00.257)       0:03:17.153 ***********
2025-06-02 00:28:54.842614 | orchestrator | ok: [testbed-manager]
2025-06-02 00:28:54.843411 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:28:54.843794 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:28:54.844294 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:28:54.845199 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:28:54.846094 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:28:54.846321 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:28:54.848739 | orchestrator |
2025-06-02 00:28:54.848842 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-06-02 00:28:54.849394 | orchestrator | Monday 02 June 2025  00:28:54 +0000 (0:00:05.568)       0:03:22.722 ***********
2025-06-02 00:28:54.918246 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-06-02 00:28:54.918434 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-06-02 00:28:54.948554 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:28:54.984106 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-06-02 00:28:54.984150 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:28:55.017633 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-02 00:28:55.018971 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:28:55.019948 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-06-02 00:28:55.047069 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:28:55.110194 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:28:55.110247 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-06-02 00:28:55.110551 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:28:55.111296 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-06-02 00:28:55.111544 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:28:55.111682 | orchestrator |
2025-06-02 00:28:55.112358 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-06-02 00:28:55.114008 | orchestrator | Monday 02 June 2025  00:28:55 +0000 (0:00:00.269)       0:03:22.991 ***********
2025-06-02 00:28:56.107970 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-06-02 00:28:56.109950 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-06-02 00:28:56.113057 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-06-02 00:28:56.113075 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-06-02 00:28:56.113082 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-06-02 00:28:56.113088 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-06-02 00:28:56.113095 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-06-02 00:28:56.115874 | orchestrator |
2025-06-02 00:28:56.117244 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-06-02 00:28:56.118176 | orchestrator | Monday 02 June 2025  00:28:56 +0000 (0:00:00.992)       0:03:23.983 ***********
2025-06-02 00:28:56.566704 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:28:56.566922 | orchestrator |
2025-06-02 00:28:56.567151 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-06-02 00:28:56.567725 | orchestrator | Monday 02 June 2025  00:28:56 +0000 (0:00:00.461)       0:03:24.445 ***********
2025-06-02 00:28:57.727958 | orchestrator | ok: [testbed-manager]
2025-06-02 00:28:57.728145 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:28:57.729016 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:28:57.731412 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:28:57.731444 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:28:57.731456 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:28:57.732051 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:28:57.732854 | orchestrator |
2025-06-02 00:28:57.733586 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-06-02 00:28:57.734272 | orchestrator | Monday 02 June 2025  00:28:57 +0000 (0:00:01.159)       0:03:25.605 ***********
2025-06-02 00:28:58.350126 | orchestrator | ok: [testbed-manager]
2025-06-02 00:28:58.350694 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:28:58.354178 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:28:58.355406 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:28:58.356077 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:28:58.356854 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:28:58.359211 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:28:58.359261 | orchestrator |
2025-06-02 00:28:58.359275 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-06-02 00:28:58.359288 | orchestrator | Monday 02 June 2025  00:28:58 +0000 (0:00:00.624)       0:03:26.229 ***********
2025-06-02 00:28:58.942384 | orchestrator | changed: [testbed-manager]
2025-06-02 00:28:58.942482 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:28:58.943195 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:28:58.944286 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:28:58.944982 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:28:58.945672 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:28:58.946694 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:28:58.947328 | orchestrator |
2025-06-02 00:28:58.947805 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-06-02 00:28:58.948809 | orchestrator | Monday 02 June 2025  00:28:58 +0000 (0:00:00.590)       0:03:26.820 ***********
2025-06-02 00:28:59.539456 | orchestrator | ok: [testbed-manager]
2025-06-02 00:28:59.539628 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:28:59.540925 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:28:59.541760 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:28:59.542720 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:28:59.543601 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:28:59.544450 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:28:59.545161 | orchestrator |
2025-06-02 00:28:59.545913 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-06-02 00:28:59.546541 | orchestrator | Monday 02 June 2025  00:28:59 +0000 (0:00:00.597)       0:03:27.417 ***********
2025-06-02 00:29:00.444378 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748822634.8630667, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.444997 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748822688.6493323, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.445574 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748822701.5649807, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.446462 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748822707.1167903, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.446861 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748822687.0765293, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.447439 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748822687.226801, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.448311 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748822689.4400122, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.449105 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748822665.163605, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.450284 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748822589.0792794, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.450510 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748822597.7799377, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.450941 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748822603.5924523, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.451596 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748822590.6903422, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.452141 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748822586.2866657, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.452605 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748822586.6256442, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:29:00.453013 | orchestrator |
2025-06-02 00:29:00.453438 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-06-02 00:29:00.453855 | orchestrator | Monday 02 June 2025  00:29:00 +0000 (0:00:00.907)       0:03:28.324 ***********
2025-06-02 00:29:01.540586 | orchestrator | changed: [testbed-manager]
2025-06-02 00:29:01.542603 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:29:01.544272 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:29:01.545305 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:29:01.546121 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:29:01.547682 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:29:01.548682 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:29:01.549623 | orchestrator |
2025-06-02 00:29:01.550376 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-06-02 00:29:01.551175 | orchestrator | Monday 02 June 2025  00:29:01 +0000 (0:00:01.091)       0:03:29.415 ***********
2025-06-02 00:29:02.697617 | orchestrator | changed: [testbed-manager]
2025-06-02 00:29:02.698752 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:29:02.700651 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:29:02.702338 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:29:02.703375 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:29:02.704443 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:29:02.705765 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:29:02.706931 | orchestrator |
2025-06-02 00:29:02.707691 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-06-02 00:29:02.708515 | orchestrator | Monday 02 June 2025  00:29:02 +0000 (0:00:01.158)       0:03:30.574 ***********
2025-06-02 00:29:03.775002 | orchestrator | changed: [testbed-manager]
2025-06-02 00:29:03.775628 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:29:03.777548 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:29:03.778396 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:29:03.779259 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:29:03.780019 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:29:03.780948 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:29:03.781943 | orchestrator |
2025-06-02 00:29:03.783264 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-06-02 00:29:03.783743 | orchestrator | Monday 02 June 2025  00:29:03 +0000 (0:00:01.078)       0:03:31.652 ***********
2025-06-02 00:29:03.874215 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:29:03.928222 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:29:03.965516 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:29:04.004048 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:29:04.078384 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:29:04.079436 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:29:04.080409 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:29:04.081576 | orchestrator |
2025-06-02 00:29:04.082392 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-06-02 00:29:04.083732 | orchestrator | Monday 02 June 2025  00:29:04 +0000 (0:00:00.304)       0:03:31.957 ***********
2025-06-02 00:29:04.796709 | orchestrator | ok: [testbed-manager]
2025-06-02 00:29:04.799388 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:29:04.799483 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:29:04.799498 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:29:04.800041 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:29:04.801413 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:29:04.802575 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:29:04.803878 | orchestrator |
2025-06-02 00:29:04.804629 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-06-02 00:29:04.806000 | orchestrator | Monday 02 June 2025  00:29:04 +0000 (0:00:00.716)       0:03:32.673 ***********
2025-06-02 00:29:05.189220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:29:05.190884 | orchestrator |
2025-06-02 00:29:05.192640 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-06-02 00:29:05.192949 | orchestrator | Monday 02 June 2025  00:29:05 +0000 (0:00:00.394)       0:03:33.068 ***********
2025-06-02 00:29:12.759239 | orchestrator | ok: [testbed-manager]
2025-06-02 00:29:12.759363 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:29:12.760959 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:29:12.763171 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:29:12.763647 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:29:12.764126 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:29:12.764973 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:29:12.765963 | orchestrator |
2025-06-02 00:29:12.767556 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-06-02 00:29:12.767772 | orchestrator | Monday 02 June 2025  00:29:12 +0000 (0:00:07.568)       0:03:40.636 ***********
2025-06-02 00:29:13.978733 | orchestrator | ok: [testbed-manager]
2025-06-02 00:29:13.978953 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:29:13.979579 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:29:13.980071 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:29:13.981625 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:29:13.982318 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:29:13.983300 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:29:13.984005 | orchestrator |
2025-06-02 00:29:13.984806 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-06-02 00:29:13.985261 | orchestrator | Monday 02 June 2025  00:29:13 +0000 (0:00:01.220)       0:03:41.857 ***********
2025-06-02 00:29:15.008822 | orchestrator | ok: [testbed-manager]
2025-06-02 00:29:15.008977 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:29:15.009060 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:29:15.009650 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:29:15.010529 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:29:15.011350 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:29:15.012310 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:29:15.013093 | orchestrator |
2025-06-02 00:29:15.013966 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-02 00:29:15.015033 | orchestrator | Monday 02 June 2025  00:29:14 +0000 (0:00:01.025)       0:03:42.883 ***********
2025-06-02 00:29:15.499470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:29:15.499935 | orchestrator |
2025-06-02 00:29:15.501300 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-02 00:29:15.501973 | orchestrator | Monday 02 June 2025  00:29:15 +0000 (0:00:00.495)       0:03:43.378 ***********
2025-06-02 00:29:23.663651 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:29:23.665625 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:29:23.666485 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:29:23.667761 | orchestrator | changed: [testbed-manager]
2025-06-02 00:29:23.669346 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:29:23.671060 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:29:23.671939 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:29:23.672943 | orchestrator |
2025-06-02 00:29:23.674092 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-06-02 00:29:23.674901 | orchestrator | Monday 02 June 2025  00:29:23 +0000 (0:00:08.163)       0:03:51.542 ***********
2025-06-02 00:29:24.261533 | orchestrator | changed: [testbed-manager]
2025-06-02 00:29:24.262320 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:29:24.263629 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:29:24.265505 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:29:24.266803 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:29:24.268427 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:29:24.269168 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:29:24.270348 | orchestrator |
2025-06-02 00:29:24.271148 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-06-02 00:29:24.272077 | orchestrator | Monday 02 June 2025  00:29:24 +0000 (0:00:00.598)       0:03:52.140 ***********
2025-06-02 00:29:25.353158 | orchestrator | changed: [testbed-manager]
2025-06-02 00:29:25.356733 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:29:25.357292 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:29:25.358080 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:29:25.358714 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:29:25.359159 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:29:25.359626 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:29:25.360288 | orchestrator |
2025-06-02 00:29:25.360623 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-06-02 00:29:25.361282 | orchestrator | Monday 02 June 2025  00:29:25 +0000 (0:00:01.091)       0:03:53.232 ***********
2025-06-02 00:29:26.384153 | orchestrator | changed: [testbed-manager]
2025-06-02 00:29:26.385541 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:29:26.387106 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:29:26.388093 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:29:26.388828 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:29:26.389610 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:29:26.390082 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:29:26.390829 | orchestrator |
2025-06-02 00:29:26.391384 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-06-02 00:29:26.391810 | orchestrator | Monday 02 June 2025  00:29:26 +0000 (0:00:01.030)       0:03:54.263 ***********
2025-06-02 00:29:26.494533 | orchestrator | ok: [testbed-manager]
2025-06-02 00:29:26.535716 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:29:26.566157 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:29:26.604891 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:29:26.683069 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:29:26.683211 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:29:26.684195 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:29:26.685345 | orchestrator |
2025-06-02 00:29:26.685821 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-02 00:29:26.686691 | orchestrator | Monday 02 June 2025  00:29:26 +0000 (0:00:00.299)       0:03:54.563 ***********
2025-06-02 00:29:26.801430 | orchestrator | ok: [testbed-manager]
2025-06-02 00:29:26.838506 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:29:26.879234 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:29:26.916178 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:29:26.983222 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:29:26.983732 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:29:26.986964 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:29:26.986989 | orchestrator |
2025-06-02 00:29:26.987002 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-06-02 00:29:26.987254 | orchestrator | Monday 02 June 2025  00:29:26 +0000 (0:00:00.299)       0:03:54.862 ***********
2025-06-02 00:29:27.079472 | orchestrator | ok: [testbed-manager]
2025-06-02 00:29:27.111717 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:29:27.145215 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:29:27.177360 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:29:27.261821 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:29:27.263069 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:29:27.265006 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:29:27.265220 | orchestrator |
2025-06-02 00:29:27.266382 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-06-02 00:29:27.267229 | orchestrator | Monday 02 June 2025  00:29:27 +0000 (0:00:00.278)       0:03:55.141 ***********
2025-06-02 00:29:33.019684 | orchestrator | ok: [testbed-manager]
2025-06-02 00:29:33.019805 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:29:33.020549 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:29:33.024048 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:29:33.024094 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:29:33.024106 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:29:33.024117 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:29:33.024795 | orchestrator |
2025-06-02 00:29:33.025432 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-06-02 00:29:33.025869 | orchestrator | Monday 02 June 2025  00:29:33 +0000 (0:00:05.756)       0:04:00.898 ***********
2025-06-02 00:29:33.394696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:29:33.396995 | orchestrator |
2025-06-02 00:29:33.397074 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-06-02 00:29:33.397139 | orchestrator | Monday 02 June 2025  00:29:33 +0000 (0:00:00.373)       0:04:01.271 ***********
2025-06-02 00:29:33.473914 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-06-02 00:29:33.474149 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-06-02 00:29:33.474240 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-06-02 00:29:33.475212 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-02 00:29:33.505782 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:29:33.562257 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:29:33.562668 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-02 00:29:33.563235 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-02 00:29:33.563970 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-02 00:29:33.564628 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-02 00:29:33.594670 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:29:33.640771 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:29:33.641999 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-02 00:29:33.645052 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-02 00:29:33.645082 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-02 00:29:33.723929 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-02 00:29:33.724173 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:29:33.724471 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:29:33.724882 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-02 00:29:33.725234 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-02 00:29:33.725257 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:29:33.725570 | orchestrator |
2025-06-02 00:29:33.725946 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-02 00:29:33.726215 | orchestrator | Monday 02 June 2025  00:29:33 +0000 (0:00:00.332)       0:04:01.604 ***********
2025-06-02 00:29:34.093248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:29:34.094097 | orchestrator |
2025-06-02 00:29:34.101241 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-02 00:29:34.106416 | orchestrator | Monday 02 June 2025  00:29:34 +0000 (0:00:00.368)       0:04:01.972 ***********
2025-06-02 00:29:34.183918 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-02 00:29:34.226483 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-02 00:29:34.227164 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:29:34.228078 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-02 00:29:34.261491 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:29:34.300188 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:29:34.304054 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-02 00:29:34.304081 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-02 00:29:34.339269 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:29:34.423526 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:29:34.425226 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-02 00:29:34.425706 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:29:34.426685 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-02 00:29:34.426956 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:29:34.427414 | orchestrator |
2025-06-02 00:29:34.427916 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-02 00:29:34.428228 | orchestrator | Monday 02 June 2025  00:29:34 +0000 (0:00:00.330)       0:04:02.302 ***********
2025-06-02 00:29:34.912290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:29:34.912400 | orchestrator |
2025-06-02 00:29:34.913722 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-02 00:29:34.914064 | orchestrator | Monday 02 June 2025  00:29:34 +0000 (0:00:00.484)       0:04:02.787 ***********
2025-06-02 00:30:08.262613 | orchestrator | changed: [testbed-manager]
2025-06-02 00:30:08.262736 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:30:08.262752 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:30:08.263023 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:30:08.268281 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:30:08.268664 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:30:08.269609 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:30:08.270987 | orchestrator |
2025-06-02 00:30:08.273158 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-02 00:30:08.273193 | orchestrator | Monday 02 June 2025  00:30:08 +0000 (0:00:33.351)       0:04:36.138 ***********
2025-06-02 00:30:16.304380 | orchestrator | changed: [testbed-manager]
2025-06-02 00:30:16.304804 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:30:16.306677 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:30:16.310790 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:30:16.310838 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:30:16.311525 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:30:16.312508 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:30:16.313156 | orchestrator |
2025-06-02 00:30:16.313931 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-02 00:30:16.314670 | orchestrator | Monday 02 June 2025  00:30:16 +0000 (0:00:08.042)       0:04:44.181 ***********
2025-06-02 00:30:23.795028 | orchestrator | changed: [testbed-manager]
2025-06-02 00:30:23.795248 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:30:23.795313 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:30:23.796676 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:30:23.799113 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:30:23.800236 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:30:23.801373 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:30:23.802463 | orchestrator |
2025-06-02 00:30:23.803843 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-02 00:30:23.804985 | orchestrator | Monday 02 June 2025  00:30:23 +0000 (0:00:07.490)       0:04:51.671 ***********
2025-06-02 00:30:25.508328 | orchestrator | ok: [testbed-manager]
2025-06-02 00:30:25.508685 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:30:25.509551 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:30:25.509930 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:30:25.510708 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:30:25.511137 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:30:25.511559 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:30:25.512291 | orchestrator |
2025-06-02 00:30:25.512582 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-02 00:30:25.513456 | orchestrator | Monday 02 June 2025  00:30:25 +0000 (0:00:01.715)       0:04:53.386 ***********
2025-06-02 00:30:30.861733 | orchestrator | changed: [testbed-manager]
2025-06-02 00:30:30.861846 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:30:30.861862 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:30:30.861873 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:30:30.861884 |
orchestrator | changed: [testbed-node-0] 2025-06-02 00:30:30.862966 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:30:30.863196 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:30:30.865084 | orchestrator | 2025-06-02 00:30:30.866158 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-06-02 00:30:30.867142 | orchestrator | Monday 02 June 2025 00:30:30 +0000 (0:00:05.348) 0:04:58.735 *********** 2025-06-02 00:30:31.263438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:30:31.263559 | orchestrator | 2025-06-02 00:30:31.263584 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-06-02 00:30:31.263605 | orchestrator | Monday 02 June 2025 00:30:31 +0000 (0:00:00.404) 0:04:59.140 *********** 2025-06-02 00:30:31.951405 | orchestrator | changed: [testbed-manager] 2025-06-02 00:30:31.952745 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:30:31.953416 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:30:31.955257 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:30:31.956534 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:30:31.958236 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:30:31.959937 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:30:31.960753 | orchestrator | 2025-06-02 00:30:31.962092 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-06-02 00:30:31.963926 | orchestrator | Monday 02 June 2025 00:30:31 +0000 (0:00:00.688) 0:04:59.828 *********** 2025-06-02 00:30:33.683774 | orchestrator | ok: [testbed-manager] 2025-06-02 00:30:33.684258 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:30:33.685359 | orchestrator | ok: [testbed-node-5] 
2025-06-02 00:30:33.686109 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:30:33.686815 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:30:33.687428 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:30:33.688072 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:30:33.688609 | orchestrator | 2025-06-02 00:30:33.689347 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-06-02 00:30:33.689801 | orchestrator | Monday 02 June 2025 00:30:33 +0000 (0:00:01.732) 0:05:01.560 *********** 2025-06-02 00:30:34.431181 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:30:34.433291 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:30:34.434113 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:30:34.435021 | orchestrator | changed: [testbed-manager] 2025-06-02 00:30:34.436491 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:30:34.437283 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:30:34.438143 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:30:34.438654 | orchestrator | 2025-06-02 00:30:34.439739 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-06-02 00:30:34.440446 | orchestrator | Monday 02 June 2025 00:30:34 +0000 (0:00:00.748) 0:05:02.309 *********** 2025-06-02 00:30:34.526230 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:30:34.571887 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:30:34.606237 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:30:34.638375 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:30:34.688299 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:30:34.689113 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:30:34.690359 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:30:34.690658 | orchestrator | 2025-06-02 00:30:34.691465 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 
2025-06-02 00:30:34.692490 | orchestrator | Monday 02 June 2025 00:30:34 +0000 (0:00:00.258) 0:05:02.568 *********** 2025-06-02 00:30:34.751657 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:30:34.781170 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:30:34.810995 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:30:34.841651 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:30:34.870883 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:30:35.034651 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:30:35.038115 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:30:35.039075 | orchestrator | 2025-06-02 00:30:35.039397 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-06-02 00:30:35.040357 | orchestrator | Monday 02 June 2025 00:30:35 +0000 (0:00:00.343) 0:05:02.911 *********** 2025-06-02 00:30:35.138742 | orchestrator | ok: [testbed-manager] 2025-06-02 00:30:35.172973 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:30:35.205246 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:30:35.244566 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:30:35.332190 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:30:35.332390 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:30:35.334514 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:30:35.336014 | orchestrator | 2025-06-02 00:30:35.336779 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-06-02 00:30:35.337533 | orchestrator | Monday 02 June 2025 00:30:35 +0000 (0:00:00.300) 0:05:03.211 *********** 2025-06-02 00:30:35.431450 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:30:35.464118 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:30:35.496203 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:30:35.532021 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:30:35.600186 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 00:30:35.600887 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:30:35.602118 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:30:35.602834 | orchestrator | 2025-06-02 00:30:35.603675 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-06-02 00:30:35.604598 | orchestrator | Monday 02 June 2025 00:30:35 +0000 (0:00:00.268) 0:05:03.480 *********** 2025-06-02 00:30:35.708941 | orchestrator | ok: [testbed-manager] 2025-06-02 00:30:35.745553 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:30:35.794879 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:30:35.828282 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:30:35.903611 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:30:35.903995 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:30:35.904681 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:30:35.905661 | orchestrator | 2025-06-02 00:30:35.906922 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-02 00:30:35.906948 | orchestrator | Monday 02 June 2025 00:30:35 +0000 (0:00:00.303) 0:05:03.783 *********** 2025-06-02 00:30:36.007189 | orchestrator | ok: [testbed-manager] => { 2025-06-02 00:30:36.007728 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 00:30:36.008839 | orchestrator | } 2025-06-02 00:30:36.036785 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 00:30:36.037495 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 00:30:36.038292 | orchestrator | } 2025-06-02 00:30:36.071154 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 00:30:36.071281 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 00:30:36.072459 | orchestrator | } 2025-06-02 00:30:36.106238 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 00:30:36.106973 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 00:30:36.107648 | orchestrator | } 2025-06-02 00:30:36.154514 | orchestrator | ok: [testbed-node-0] 
=> { 2025-06-02 00:30:36.154710 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 00:30:36.155893 | orchestrator | } 2025-06-02 00:30:36.157179 | orchestrator | ok: [testbed-node-1] => { 2025-06-02 00:30:36.157705 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 00:30:36.158698 | orchestrator | } 2025-06-02 00:30:36.159301 | orchestrator | ok: [testbed-node-2] => { 2025-06-02 00:30:36.160760 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 00:30:36.163190 | orchestrator | } 2025-06-02 00:30:36.164324 | orchestrator | 2025-06-02 00:30:36.164778 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-02 00:30:36.165723 | orchestrator | Monday 02 June 2025 00:30:36 +0000 (0:00:00.251) 0:05:04.035 *********** 2025-06-02 00:30:36.213514 | orchestrator | ok: [testbed-manager] => { 2025-06-02 00:30:36.214179 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 00:30:36.215104 | orchestrator | } 2025-06-02 00:30:36.267428 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 00:30:36.267643 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 00:30:36.268404 | orchestrator | } 2025-06-02 00:30:36.391880 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 00:30:36.392480 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 00:30:36.393092 | orchestrator | } 2025-06-02 00:30:36.426094 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 00:30:36.426611 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 00:30:36.427568 | orchestrator | } 2025-06-02 00:30:36.457893 | orchestrator | ok: [testbed-node-0] => { 2025-06-02 00:30:36.459471 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 00:30:36.460042 | orchestrator | } 2025-06-02 00:30:36.513727 | orchestrator | ok: [testbed-node-1] => { 2025-06-02 00:30:36.515001 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 00:30:36.515698 | orchestrator | } 2025-06-02 00:30:36.516932 | orchestrator | 
ok: [testbed-node-2] => { 2025-06-02 00:30:36.517452 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 00:30:36.518437 | orchestrator | } 2025-06-02 00:30:36.519323 | orchestrator | 2025-06-02 00:30:36.520098 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-02 00:30:36.520808 | orchestrator | Monday 02 June 2025 00:30:36 +0000 (0:00:00.358) 0:05:04.393 *********** 2025-06-02 00:30:36.613454 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:30:36.647343 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:30:36.677054 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:30:36.704282 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:30:36.753552 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:30:36.754302 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:30:36.755181 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:30:36.756071 | orchestrator | 2025-06-02 00:30:36.757009 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-02 00:30:36.757814 | orchestrator | Monday 02 June 2025 00:30:36 +0000 (0:00:00.240) 0:05:04.633 *********** 2025-06-02 00:30:36.854444 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:30:36.884182 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:30:36.913459 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:30:36.942421 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:30:36.991969 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:30:36.993170 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:30:36.994674 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:30:36.994940 | orchestrator | 2025-06-02 00:30:36.995929 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-02 00:30:36.997024 | orchestrator | Monday 02 June 2025 00:30:36 +0000 (0:00:00.236) 0:05:04.870 
*********** 2025-06-02 00:30:37.388656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:30:37.389075 | orchestrator | 2025-06-02 00:30:37.389992 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-02 00:30:37.390606 | orchestrator | Monday 02 June 2025 00:30:37 +0000 (0:00:00.397) 0:05:05.268 *********** 2025-06-02 00:30:38.163778 | orchestrator | ok: [testbed-manager] 2025-06-02 00:30:38.164053 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:30:38.164630 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:30:38.165522 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:30:38.166319 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:30:38.166799 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:30:38.167325 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:30:38.168644 | orchestrator | 2025-06-02 00:30:38.169290 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-02 00:30:38.169867 | orchestrator | Monday 02 June 2025 00:30:38 +0000 (0:00:00.773) 0:05:06.041 *********** 2025-06-02 00:30:40.782997 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:30:40.783248 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:30:40.784417 | orchestrator | ok: [testbed-manager] 2025-06-02 00:30:40.788371 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:30:40.789329 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:30:40.790527 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:30:40.791646 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:30:40.792824 | orchestrator | 2025-06-02 00:30:40.794114 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-02 00:30:40.794761 
| orchestrator | Monday 02 June 2025 00:30:40 +0000 (0:00:02.619) 0:05:08.661 *********** 2025-06-02 00:30:40.855309 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-02 00:30:40.855471 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-02 00:30:40.929960 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-02 00:30:40.930561 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-02 00:30:40.931401 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-02 00:30:41.007370 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-02 00:30:41.007876 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:30:41.008413 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-02 00:30:41.009565 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-02 00:30:41.010104 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-06-02 00:30:41.200247 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:30:41.201356 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-02 00:30:41.202094 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-06-02 00:30:41.203518 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-02 00:30:41.281498 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:30:41.281582 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-02 00:30:41.282563 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-02 00:30:41.283452 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-02 00:30:41.364881 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:30:41.365025 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-02 00:30:41.365040 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-02 00:30:41.365052 | 
orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-06-02 00:30:41.499116 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:30:41.499827 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:30:41.507274 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-02 00:30:41.507335 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-02 00:30:41.507356 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-02 00:30:41.507375 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:30:41.507393 | orchestrator | 2025-06-02 00:30:41.507436 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-02 00:30:41.507465 | orchestrator | Monday 02 June 2025 00:30:41 +0000 (0:00:00.716) 0:05:09.378 *********** 2025-06-02 00:30:47.684578 | orchestrator | ok: [testbed-manager] 2025-06-02 00:30:47.684691 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:30:47.684707 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:30:47.684716 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:30:47.684785 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:30:47.685177 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:30:47.686527 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:30:47.686875 | orchestrator | 2025-06-02 00:30:47.687422 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-02 00:30:47.687819 | orchestrator | Monday 02 June 2025 00:30:47 +0000 (0:00:06.181) 0:05:15.559 *********** 2025-06-02 00:30:48.695069 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:30:48.695447 | orchestrator | ok: [testbed-manager] 2025-06-02 00:30:48.695478 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:30:48.697109 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:30:48.699071 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:30:48.699887 | 
orchestrator | changed: [testbed-node-1] 2025-06-02 00:30:48.700619 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:30:48.701599 | orchestrator | 2025-06-02 00:30:48.702125 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-02 00:30:48.703415 | orchestrator | Monday 02 June 2025 00:30:48 +0000 (0:00:01.012) 0:05:16.572 *********** 2025-06-02 00:30:56.458107 | orchestrator | ok: [testbed-manager] 2025-06-02 00:30:56.458289 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:30:56.459531 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:30:56.460559 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:30:56.461957 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:30:56.462695 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:30:56.464137 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:30:56.466248 | orchestrator | 2025-06-02 00:30:56.467424 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-06-02 00:30:56.468221 | orchestrator | Monday 02 June 2025 00:30:56 +0000 (0:00:07.763) 0:05:24.336 *********** 2025-06-02 00:30:59.570637 | orchestrator | changed: [testbed-manager] 2025-06-02 00:30:59.570839 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:30:59.571228 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:30:59.573158 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:30:59.573744 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:30:59.574604 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:30:59.575995 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:30:59.576947 | orchestrator | 2025-06-02 00:30:59.577646 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-02 00:30:59.578406 | orchestrator | Monday 02 June 2025 00:30:59 +0000 (0:00:03.109) 0:05:27.446 *********** 2025-06-02 00:31:00.370405 | orchestrator | ok: 
[testbed-manager] 2025-06-02 00:31:01.267256 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:31:01.267965 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:31:01.268404 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:31:01.277323 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:31:01.277413 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:31:01.277424 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:31:01.277431 | orchestrator | 2025-06-02 00:31:01.277439 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-02 00:31:01.277446 | orchestrator | Monday 02 June 2025 00:31:01 +0000 (0:00:01.695) 0:05:29.141 *********** 2025-06-02 00:31:02.552538 | orchestrator | ok: [testbed-manager] 2025-06-02 00:31:02.552655 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:31:02.552670 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:31:02.553052 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:31:02.553238 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:31:02.554156 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:31:02.554691 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:31:02.555619 | orchestrator | 2025-06-02 00:31:02.556156 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-02 00:31:02.556908 | orchestrator | Monday 02 June 2025 00:31:02 +0000 (0:00:01.284) 0:05:30.426 *********** 2025-06-02 00:31:02.750103 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:31:02.831678 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:31:02.889437 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:31:02.956076 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:31:03.154253 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:31:03.155082 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:31:03.156531 | orchestrator | changed: [testbed-manager] 
2025-06-02 00:31:03.157477 | orchestrator | 2025-06-02 00:31:03.158013 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-06-02 00:31:03.158726 | orchestrator | Monday 02 June 2025 00:31:03 +0000 (0:00:00.608) 0:05:31.034 *********** 2025-06-02 00:31:12.571529 | orchestrator | ok: [testbed-manager] 2025-06-02 00:31:12.571643 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:31:12.571661 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:31:12.572838 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:31:12.573419 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:31:12.575123 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:31:12.575771 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:31:12.576049 | orchestrator | 2025-06-02 00:31:12.576996 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-02 00:31:12.577903 | orchestrator | Monday 02 June 2025 00:31:12 +0000 (0:00:09.412) 0:05:40.446 *********** 2025-06-02 00:31:13.505585 | orchestrator | changed: [testbed-manager] 2025-06-02 00:31:13.507503 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:31:13.508110 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:31:13.508472 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:31:13.509848 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:31:13.509908 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:31:13.509976 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:31:13.510388 | orchestrator | 2025-06-02 00:31:13.510572 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-02 00:31:13.511797 | orchestrator | Monday 02 June 2025 00:31:13 +0000 (0:00:00.937) 0:05:41.384 *********** 2025-06-02 00:31:21.958385 | orchestrator | ok: [testbed-manager] 2025-06-02 00:31:21.958507 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:31:21.958525 | 
orchestrator | changed: [testbed-node-4] 2025-06-02 00:31:21.959278 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:31:21.960226 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:31:21.960612 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:31:21.961247 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:31:21.961743 | orchestrator | 2025-06-02 00:31:21.962424 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-02 00:31:21.962833 | orchestrator | Monday 02 June 2025 00:31:21 +0000 (0:00:08.448) 0:05:49.832 *********** 2025-06-02 00:31:32.258234 | orchestrator | ok: [testbed-manager] 2025-06-02 00:31:32.260058 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:31:32.260093 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:31:32.260105 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:31:32.260691 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:31:32.261574 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:31:32.262596 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:31:32.263413 | orchestrator | 2025-06-02 00:31:32.264056 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-06-02 00:31:32.265176 | orchestrator | Monday 02 June 2025 00:31:32 +0000 (0:00:10.300) 0:06:00.133 *********** 2025-06-02 00:31:32.670123 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-02 00:31:33.442086 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-02 00:31:33.442318 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-06-02 00:31:33.442737 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-02 00:31:33.443459 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-02 00:31:33.443858 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-02 00:31:33.444569 | orchestrator | ok: [testbed-node-1] => 
(item=python3-docker)
2025-06-02 00:31:33.445191 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-02 00:31:33.445815 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-02 00:31:33.447059 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-02 00:31:33.448293 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-02 00:31:33.449034 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-02 00:31:33.449422 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-02 00:31:33.449850 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-02 00:31:33.450499 | orchestrator |
2025-06-02 00:31:33.451039 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-02 00:31:33.451731 | orchestrator | Monday 02 June 2025 00:31:33 +0000 (0:00:01.186) 0:06:01.319 ***********
2025-06-02 00:31:33.569650 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:31:33.628569 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:31:33.695450 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:31:33.754560 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:31:33.813726 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:31:33.921022 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:31:33.922336 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:31:33.922374 | orchestrator |
2025-06-02 00:31:33.923308 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-02 00:31:33.923799 | orchestrator | Monday 02 June 2025 00:31:33 +0000 (0:00:00.481) 0:06:01.801 ***********
2025-06-02 00:31:37.522338 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:37.522668 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:31:37.522881 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:31:37.523442 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:31:37.523820 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:31:37.524452 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:31:37.525626 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:31:37.530076 | orchestrator |
2025-06-02 00:31:37.530640 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-02 00:31:37.531107 | orchestrator | Monday 02 June 2025 00:31:37 +0000 (0:00:03.597) 0:06:05.399 ***********
2025-06-02 00:31:37.682176 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:31:37.742526 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:31:37.808148 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:31:37.878254 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:31:37.940478 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:31:38.036168 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:31:38.036676 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:31:38.037474 | orchestrator |
2025-06-02 00:31:38.038213 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-02 00:31:38.042288 | orchestrator | Monday 02 June 2025 00:31:38 +0000 (0:00:00.515) 0:06:05.914 ***********
2025-06-02 00:31:38.111126 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-02 00:31:38.111686 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-02 00:31:38.178401 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:31:38.179367 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-02 00:31:38.180010 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-02 00:31:38.245121 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:31:38.245624 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-02 00:31:38.246360 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-02 00:31:38.312240 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:31:38.312439 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-02 00:31:38.313107 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-02 00:31:38.375729 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:31:38.376398 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-02 00:31:38.377138 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-02 00:31:38.441658 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:31:38.442151 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-06-02 00:31:38.442825 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-06-02 00:31:38.550236 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:31:38.550410 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-06-02 00:31:38.551115 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-06-02 00:31:38.554239 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:31:38.554264 | orchestrator |
2025-06-02 00:31:38.554335 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-06-02 00:31:38.555058 | orchestrator | Monday 02 June 2025 00:31:38 +0000 (0:00:00.515) 0:06:06.429 ***********
2025-06-02 00:31:38.672016 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:31:38.739110 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:31:38.798596 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:31:38.857507 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:31:38.922204 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:31:39.004815 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:31:39.006199 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:31:39.009472 | orchestrator |
2025-06-02 00:31:39.009505 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-06-02 00:31:39.009518 | orchestrator | Monday 02 June 2025 00:31:38 +0000 (0:00:00.453) 0:06:06.883 ***********
2025-06-02 00:31:39.143353 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:31:39.207547 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:31:39.270326 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:31:39.346441 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:31:39.397726 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:31:39.513902 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:31:39.514388 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:31:39.515297 | orchestrator |
2025-06-02 00:31:39.516537 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-06-02 00:31:39.516653 | orchestrator | Monday 02 June 2025 00:31:39 +0000 (0:00:00.508) 0:06:07.391 ***********
2025-06-02 00:31:39.646331 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:31:39.707284 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:31:39.928514 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:31:39.991007 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:31:40.050503 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:31:40.170803 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:31:40.172026 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:31:40.173085 | orchestrator |
2025-06-02 00:31:40.174333 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-02 00:31:40.175637 | orchestrator | Monday 02 June 2025 00:31:40 +0000 (0:00:00.656) 0:06:08.048 ***********
2025-06-02 00:31:41.765463 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:41.770720 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:31:41.770765 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:31:41.770778 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:31:41.770791 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:31:41.770803 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:31:41.770815 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:31:41.770827 | orchestrator |
2025-06-02 00:31:41.771404 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-06-02 00:31:41.772019 | orchestrator | Monday 02 June 2025 00:31:41 +0000 (0:00:01.593) 0:06:09.642 ***********
2025-06-02 00:31:42.657491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:31:42.657926 | orchestrator |
2025-06-02 00:31:42.659085 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-06-02 00:31:42.660135 | orchestrator | Monday 02 June 2025 00:31:42 +0000 (0:00:00.891) 0:06:10.533 ***********
2025-06-02 00:31:43.105815 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:43.507336 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:31:43.507613 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:31:43.508195 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:31:43.508638 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:31:43.509337 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:31:43.512148 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:31:43.512172 | orchestrator |
2025-06-02 00:31:43.512186 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-06-02 00:31:43.512200 | orchestrator | Monday 02 June 2025 00:31:43 +0000 (0:00:00.852) 0:06:11.386 ***********
2025-06-02 00:31:43.960488 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:44.031077 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:31:44.530482 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:31:44.531186 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:31:44.532440 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:31:44.534825 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:31:44.534847 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:31:44.534852 | orchestrator |
2025-06-02 00:31:44.535608 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-02 00:31:44.536410 | orchestrator | Monday 02 June 2025 00:31:44 +0000 (0:00:01.023) 0:06:12.409 ***********
2025-06-02 00:31:45.845326 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:45.845679 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:31:45.846521 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:31:45.850195 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:31:45.850289 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:31:45.850304 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:31:45.850378 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:31:45.851417 | orchestrator |
2025-06-02 00:31:45.851712 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-02 00:31:45.852070 | orchestrator | Monday 02 June 2025 00:31:45 +0000 (0:00:01.381) 0:06:13.724 ***********
2025-06-02 00:31:45.970713 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:31:47.228219 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:31:47.228299 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:31:47.228307 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:31:47.228347 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:31:47.228749 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:31:47.229016 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:31:47.229452 | orchestrator |
2025-06-02 00:31:47.230300 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-02 00:31:47.230327 | orchestrator | Monday 02 June 2025 00:31:47 +0000 (0:00:01.381) 0:06:15.105 ***********
2025-06-02 00:31:48.518617 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:48.518727 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:31:48.518882 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:31:48.520035 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:31:48.522379 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:31:48.523762 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:31:48.524573 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:31:48.525365 | orchestrator |
2025-06-02 00:31:48.526211 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-02 00:31:48.526863 | orchestrator | Monday 02 June 2025 00:31:48 +0000 (0:00:01.289) 0:06:16.394 ***********
2025-06-02 00:31:50.014221 | orchestrator | changed: [testbed-manager]
2025-06-02 00:31:50.018351 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:31:50.018391 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:31:50.018403 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:31:50.018629 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:31:50.019934 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:31:50.020604 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:31:50.021609 | orchestrator |
2025-06-02 00:31:50.023019 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-06-02 00:31:50.023400 | orchestrator | Monday 02 June 2025 00:31:50 +0000 (0:00:01.497) 0:06:17.891 ***********
2025-06-02 00:31:50.795014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:31:50.795462 | orchestrator |
2025-06-02 00:31:50.798467 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-06-02 00:31:50.798567 | orchestrator | Monday 02 June 2025 00:31:50 +0000 (0:00:00.779) 0:06:18.671 ***********
2025-06-02 00:31:52.088203 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:52.088363 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:31:52.088945 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:31:52.089391 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:31:52.091169 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:31:52.092139 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:31:52.092655 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:31:52.093396 | orchestrator |
2025-06-02 00:31:52.094391 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-06-02 00:31:52.094945 | orchestrator | Monday 02 June 2025 00:31:52 +0000 (0:00:01.293) 0:06:19.965 ***********
2025-06-02 00:31:53.195540 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:53.196551 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:31:53.197332 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:31:53.198175 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:31:53.199362 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:31:53.201902 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:31:53.202807 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:31:53.203380 | orchestrator |
2025-06-02 00:31:53.204141 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-06-02 00:31:53.204417 | orchestrator | Monday 02 June 2025 00:31:53 +0000 (0:00:01.106) 0:06:21.072 ***********
2025-06-02 00:31:54.504727 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:54.506178 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:31:54.507503 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:31:54.509989 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:31:54.510752 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:31:54.512450 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:31:54.513230 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:31:54.513741 | orchestrator |
2025-06-02 00:31:54.514662 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-06-02 00:31:54.515506 | orchestrator | Monday 02 June 2025 00:31:54 +0000 (0:00:01.309) 0:06:22.381 ***********
2025-06-02 00:31:55.614694 | orchestrator | ok: [testbed-manager]
2025-06-02 00:31:55.614824 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:31:55.614901 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:31:55.615591 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:31:55.615657 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:31:55.615998 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:31:55.617464 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:31:55.618350 | orchestrator |
2025-06-02 00:31:55.619135 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-06-02 00:31:55.620012 | orchestrator | Monday 02 June 2025 00:31:55 +0000 (0:00:01.109) 0:06:23.491 ***********
2025-06-02 00:31:56.703202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:31:56.703399 | orchestrator |
2025-06-02 00:31:56.704458 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 00:31:56.704700 | orchestrator | Monday 02 June 2025 00:31:56 +0000 (0:00:00.819) 0:06:24.310 ***********
2025-06-02 00:31:56.705611 | orchestrator |
2025-06-02 00:31:56.706436 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 00:31:56.707055 | orchestrator | Monday 02 June 2025 00:31:56 +0000 (0:00:00.036) 0:06:24.347 ***********
2025-06-02 00:31:56.707524 | orchestrator |
2025-06-02 00:31:56.708320 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 00:31:56.709087 | orchestrator | Monday 02 June 2025 00:31:56 +0000 (0:00:00.041) 0:06:24.388 ***********
2025-06-02 00:31:56.709647 | orchestrator |
2025-06-02 00:31:56.710123 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 00:31:56.710433 | orchestrator | Monday 02 June 2025 00:31:56 +0000 (0:00:00.036) 0:06:24.425 ***********
2025-06-02 00:31:56.711160 | orchestrator |
2025-06-02 00:31:56.711760 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 00:31:56.712127 | orchestrator | Monday 02 June 2025 00:31:56 +0000 (0:00:00.036) 0:06:24.461 ***********
2025-06-02 00:31:56.712610 | orchestrator |
2025-06-02 00:31:56.713151 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 00:31:56.713555 | orchestrator | Monday 02 June 2025 00:31:56 +0000 (0:00:00.042) 0:06:24.504 ***********
2025-06-02 00:31:56.714253 | orchestrator |
2025-06-02 00:31:56.714600 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 00:31:56.714913 | orchestrator | Monday 02 June 2025 00:31:56 +0000 (0:00:00.036) 0:06:24.541 ***********
2025-06-02 00:31:56.715559 | orchestrator |
2025-06-02 00:31:56.716172 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 00:31:56.716544 | orchestrator | Monday 02 June 2025 00:31:56 +0000 (0:00:00.039) 0:06:24.580 ***********
2025-06-02 00:31:57.980590 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:31:57.981524 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:31:57.983949 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:31:57.984624 | orchestrator |
2025-06-02 00:31:57.985581 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-06-02 00:31:57.986528 | orchestrator | Monday 02 June 2025 00:31:57 +0000 (0:00:01.277) 0:06:25.857 ***********
2025-06-02 00:31:59.255620 | orchestrator | changed: [testbed-manager]
2025-06-02 00:31:59.256106 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:31:59.257080 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:31:59.258235 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:31:59.259551 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:31:59.260022 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:31:59.260633 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:31:59.261159 | orchestrator |
2025-06-02 00:31:59.261839 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-06-02 00:31:59.263359 | orchestrator | Monday 02 June 2025 00:31:59 +0000 (0:00:01.275) 0:06:27.132 ***********
2025-06-02 00:32:00.329897 | orchestrator | changed: [testbed-manager]
2025-06-02 00:32:00.331256 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:32:00.331294 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:32:00.333163 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:32:00.333735 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:32:00.335452 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:32:00.336050 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:32:00.336994 | orchestrator |
2025-06-02 00:32:00.338222 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-06-02 00:32:00.338936 | orchestrator | Monday 02 June 2025 00:32:00 +0000 (0:00:01.073) 0:06:28.206 ***********
2025-06-02 00:32:00.462374 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:32:02.507118 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:32:02.507291 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:32:02.508336 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:32:02.509726 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:32:02.510719 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:32:02.511544 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:32:02.512147 | orchestrator |
2025-06-02 00:32:02.512725 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-06-02 00:32:02.513324 | orchestrator | Monday 02 June 2025 00:32:02 +0000 (0:00:02.175) 0:06:30.382 ***********
2025-06-02 00:32:02.594848 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:32:02.594942 | orchestrator |
2025-06-02 00:32:02.596036 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-06-02 00:32:02.596680 | orchestrator | Monday 02 June 2025 00:32:02 +0000 (0:00:00.091) 0:06:30.474 ***********
2025-06-02 00:32:03.555902 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:03.556159 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:32:03.556667 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:32:03.557234 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:32:03.557865 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:32:03.558289 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:32:03.559699 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:32:03.560258 | orchestrator |
2025-06-02 00:32:03.560687 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-06-02 00:32:03.561360 | orchestrator | Monday 02 June 2025 00:32:03 +0000 (0:00:00.957) 0:06:31.432 ***********
2025-06-02 00:32:03.860421 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:32:03.925613 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:32:03.989766 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:32:04.058399 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:32:04.119443 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:32:04.243352 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:32:04.243551 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:32:04.244726 | orchestrator |
2025-06-02 00:32:04.246227 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-06-02 00:32:04.247428 | orchestrator | Monday 02 June 2025 00:32:04 +0000 (0:00:00.689) 0:06:32.121 ***********
2025-06-02 00:32:05.082647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:32:05.083590 | orchestrator |
2025-06-02 00:32:05.086515 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-06-02 00:32:05.087499 | orchestrator | Monday 02 June 2025 00:32:05 +0000 (0:00:00.838) 0:06:32.960 ***********
2025-06-02 00:32:05.473491 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:05.912287 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:32:05.912407 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:32:05.912423 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:32:05.912435 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:32:05.914165 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:32:05.914287 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:32:05.914305 | orchestrator |
2025-06-02 00:32:05.914317 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-06-02 00:32:05.914452 | orchestrator | Monday 02 June 2025 00:32:05 +0000 (0:00:00.829) 0:06:33.790 ***********
2025-06-02 00:32:08.495373 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-06-02 00:32:08.495644 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-06-02 00:32:08.498433 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-06-02 00:32:08.498487 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-06-02 00:32:08.499943 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-06-02 00:32:08.500817 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-06-02 00:32:08.501646 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-06-02 00:32:08.502939 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-06-02 00:32:08.503132 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-06-02 00:32:08.504464 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-06-02 00:32:08.505138 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-06-02 00:32:08.506639 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-06-02 00:32:08.506736 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-06-02 00:32:08.508190 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-06-02 00:32:08.508405 | orchestrator |
2025-06-02 00:32:08.510120 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-06-02 00:32:08.510437 | orchestrator | Monday 02 June 2025 00:32:08 +0000 (0:00:02.581) 0:06:36.371 ***********
2025-06-02 00:32:08.624631 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:32:08.687935 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:32:08.759593 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:32:08.816910 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:32:08.876066 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:32:08.959897 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:32:08.960136 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:32:08.961198 | orchestrator |
2025-06-02 00:32:08.964454 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-06-02 00:32:08.964555 | orchestrator | Monday 02 June 2025 00:32:08 +0000 (0:00:00.468) 0:06:36.839 ***********
2025-06-02 00:32:09.744809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:32:09.745610 | orchestrator |
2025-06-02 00:32:09.749355 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-06-02 00:32:09.749394 | orchestrator | Monday 02 June 2025 00:32:09 +0000 (0:00:00.781) 0:06:37.621 ***********
2025-06-02 00:32:10.281346 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:10.346631 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:32:10.776267 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:32:10.776507 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:32:10.777153 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:32:10.778418 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:32:10.778921 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:32:10.779694 | orchestrator |
2025-06-02 00:32:10.780173 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-06-02 00:32:10.781076 | orchestrator | Monday 02 June 2025 00:32:10 +0000 (0:00:01.031) 0:06:38.652 ***********
2025-06-02 00:32:11.183799 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:11.568498 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:32:11.568596 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:32:11.568922 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:32:11.569860 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:32:11.570506 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:32:11.572599 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:32:11.572630 | orchestrator |
2025-06-02 00:32:11.572814 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-06-02 00:32:11.573583 | orchestrator | Monday 02 June 2025 00:32:11 +0000 (0:00:00.792) 0:06:39.445 ***********
2025-06-02 00:32:11.710322 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:32:11.772667 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:32:11.839826 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:32:11.907853 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:32:11.980592 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:32:12.072495 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:32:12.072698 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:32:12.073166 | orchestrator |
2025-06-02 00:32:12.073846 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-06-02 00:32:12.074224 | orchestrator | Monday 02 June 2025 00:32:12 +0000 (0:00:00.506) 0:06:39.951 ***********
2025-06-02 00:32:13.476956 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:13.477110 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:32:13.477128 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:32:13.477269 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:32:13.477583 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:32:13.481589 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:32:13.484131 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:32:13.484164 | orchestrator |
2025-06-02 00:32:13.484177 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-06-02 00:32:13.484190 | orchestrator | Monday 02 June 2025 00:32:13 +0000 (0:00:01.401) 0:06:41.352 ***********
2025-06-02 00:32:13.602575 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:32:13.672797 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:32:13.735633 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:32:13.795477 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:32:13.859341 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:32:13.948732 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:32:13.949509 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:32:13.949935 | orchestrator |
2025-06-02 00:32:13.950812 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-06-02 00:32:13.951814 | orchestrator | Monday 02 June 2025 00:32:13 +0000 (0:00:00.474) 0:06:41.826 ***********
2025-06-02 00:32:21.360087 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:21.360227 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:32:21.360256 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:32:21.361179 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:32:21.364501 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:32:21.366278 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:32:21.366753 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:32:21.367586 | orchestrator |
2025-06-02 00:32:21.368540 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-06-02 00:32:21.369037 | orchestrator | Monday 02 June 2025 00:32:21 +0000 (0:00:07.407) 0:06:49.234 ***********
2025-06-02 00:32:22.654665 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:22.654767 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:32:22.656089 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:32:22.657914 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:32:22.659257 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:32:22.659560 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:32:22.661169 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:32:22.662136 | orchestrator |
2025-06-02 00:32:22.663376 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-06-02 00:32:22.664495 | orchestrator | Monday 02 June 2025 00:32:22 +0000 (0:00:01.298) 0:06:50.532 ***********
2025-06-02 00:32:24.290132 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:24.291075 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:32:24.291953 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:32:24.293313 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:32:24.294116 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:32:24.294486 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:32:24.295035 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:32:24.295476 | orchestrator |
2025-06-02 00:32:24.295936 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-06-02 00:32:24.296382 | orchestrator | Monday 02 June 2025 00:32:24 +0000 (0:00:01.634) 0:06:52.166 ***********
2025-06-02 00:32:25.872335 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:25.872548 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:32:25.875322 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:32:25.876346 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:32:25.876830 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:32:25.878320 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:32:25.879414 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:32:25.880038 | orchestrator |
2025-06-02 00:32:25.880949 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 00:32:25.881744 | orchestrator | Monday 02 June 2025 00:32:25 +0000 (0:00:01.582) 0:06:53.749 ***********
2025-06-02 00:32:26.347529 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:26.897072 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:32:26.899325 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:32:26.899523 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:32:26.900694 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:32:26.902341 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:32:26.903124 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:32:26.904257 | orchestrator |
2025-06-02 00:32:26.905064 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 00:32:26.905741 | orchestrator | Monday 02 June 2025 00:32:26 +0000 (0:00:01.026) 0:06:54.776 ***********
2025-06-02 00:32:27.016511 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:32:27.082743 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:32:27.144746 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:32:27.204079 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:32:27.272750 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:32:27.646404 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:32:27.646613 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:32:27.651145 | orchestrator |
2025-06-02 00:32:27.651305 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-06-02 00:32:27.651326 | orchestrator | Monday 02 June 2025 00:32:27 +0000 (0:00:00.747) 0:06:55.523 ***********
2025-06-02 00:32:27.775517 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:32:27.837879 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:32:27.905466 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:32:27.966977 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:32:28.027736 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:32:28.137626 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:32:28.138423 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:32:28.139749 | orchestrator |
2025-06-02 00:32:28.140372 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-06-02 00:32:28.141545 | orchestrator | Monday 02 June 2025 00:32:28 +0000 (0:00:00.493) 0:06:56.017 ***********
2025-06-02 00:32:28.261759 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:28.330763 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:32:28.395962 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:32:28.459834 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:32:28.684812 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:32:28.793399 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:32:28.793788 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:32:28.794960 | orchestrator |
2025-06-02 00:32:28.798347 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-06-02 00:32:28.798413 | orchestrator | Monday 02 June 2025 00:32:28 +0000 (0:00:00.653) 0:06:56.671 ***********
2025-06-02 00:32:28.925421 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:28.986940 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:32:29.050195 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:32:29.116101 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:32:29.177474 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:32:29.277274 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:32:29.278225 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:32:29.279380 | orchestrator |
2025-06-02 00:32:29.282523 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-06-02 00:32:29.282570 | orchestrator | Monday 02 June 2025 00:32:29 +0000 (0:00:00.484) 0:06:57.155 ***********
2025-06-02 00:32:29.408301 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:29.469825 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:32:29.538228 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:32:29.598387 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:32:29.661793 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:32:29.765353 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:32:29.766389 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:32:29.767220 | orchestrator |
2025-06-02 00:32:29.768553 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-06-02 00:32:29.771331 | orchestrator | Monday 02 June 2025 00:32:29 +0000 (0:00:00.490) 0:06:57.645 ***********
2025-06-02 00:32:35.303675 | orchestrator | ok: [testbed-manager]
2025-06-02 00:32:35.304807 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:32:35.305327 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:32:35.306157 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:32:35.307072 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:32:35.307492 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:32:35.307940 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:32:35.310835 | orchestrator |
2025-06-02 00:32:35.312186 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-06-02 00:32:35.313357 | orchestrator | Monday 02 June 2025 00:32:35 +0000 (0:00:05.536) 0:07:03.182 ***********
2025-06-02 00:32:35.443351 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:32:35.502893 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:32:35.565816 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:32:35.631853 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:32:35.688354 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:32:35.788120 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:32:35.788612 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:32:35.789079 | orchestrator |
2025-06-02 00:32:35.789835 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-06-02 00:32:35.790565 | orchestrator | Monday 02 June 2025 00:32:35 +0000 (0:00:00.484) 0:07:03.666 ***********
2025-06-02 00:32:36.751147 |
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:32:36.751585 | orchestrator | 2025-06-02 00:32:36.752514 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-02 00:32:36.753141 | orchestrator | Monday 02 June 2025 00:32:36 +0000 (0:00:00.963) 0:07:04.629 *********** 2025-06-02 00:32:38.482573 | orchestrator | ok: [testbed-manager] 2025-06-02 00:32:38.483292 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:32:38.486699 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:32:38.488417 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:32:38.488762 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:32:38.490719 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:32:38.491439 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:32:38.492148 | orchestrator | 2025-06-02 00:32:38.494702 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-02 00:32:38.495127 | orchestrator | Monday 02 June 2025 00:32:38 +0000 (0:00:01.730) 0:07:06.359 *********** 2025-06-02 00:32:39.557447 | orchestrator | ok: [testbed-manager] 2025-06-02 00:32:39.557985 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:32:39.559545 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:32:39.560504 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:32:39.561345 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:32:39.562146 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:32:39.562880 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:32:39.563382 | orchestrator | 2025-06-02 00:32:39.563915 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-02 00:32:39.564502 | orchestrator | Monday 02 June 2025 00:32:39 +0000 (0:00:01.076) 
0:07:07.436 *********** 2025-06-02 00:32:40.146563 | orchestrator | ok: [testbed-manager] 2025-06-02 00:32:40.560456 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:32:40.561603 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:32:40.564123 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:32:40.565091 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:32:40.565755 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:32:40.566923 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:32:40.568244 | orchestrator | 2025-06-02 00:32:40.570218 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-02 00:32:40.570569 | orchestrator | Monday 02 June 2025 00:32:40 +0000 (0:00:01.000) 0:07:08.436 *********** 2025-06-02 00:32:42.254948 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 00:32:42.257561 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 00:32:42.257594 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 00:32:42.257606 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 00:32:42.258566 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 00:32:42.259500 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 00:32:42.260796 | orchestrator | changed: [testbed-node-2] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 00:32:42.261468 | orchestrator | 2025-06-02 00:32:42.262124 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-02 00:32:42.263061 | orchestrator | Monday 02 June 2025 00:32:42 +0000 (0:00:01.692) 0:07:10.129 *********** 2025-06-02 00:32:43.019298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:32:43.019403 | orchestrator | 2025-06-02 00:32:43.021149 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-02 00:32:43.021175 | orchestrator | Monday 02 June 2025 00:32:43 +0000 (0:00:00.764) 0:07:10.894 *********** 2025-06-02 00:32:51.998166 | orchestrator | changed: [testbed-manager] 2025-06-02 00:32:51.998412 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:32:51.999643 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:32:52.000309 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:32:52.000932 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:32:52.003611 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:32:52.004905 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:32:52.005719 | orchestrator | 2025-06-02 00:32:52.006296 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-02 00:32:52.007050 | orchestrator | Monday 02 June 2025 00:32:51 +0000 (0:00:08.980) 0:07:19.874 *********** 2025-06-02 00:32:53.640576 | orchestrator | ok: [testbed-manager] 2025-06-02 00:32:53.640735 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:32:53.644075 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:32:53.644653 | orchestrator | ok: 
[testbed-node-5] 2025-06-02 00:32:53.647199 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:32:53.648582 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:32:53.649576 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:32:53.650511 | orchestrator | 2025-06-02 00:32:53.651295 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-02 00:32:53.652640 | orchestrator | Monday 02 June 2025 00:32:53 +0000 (0:00:01.640) 0:07:21.515 *********** 2025-06-02 00:32:54.914940 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:32:54.915721 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:32:54.917100 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:32:54.917133 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:32:54.918178 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:32:54.918818 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:32:54.919741 | orchestrator | 2025-06-02 00:32:54.920075 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-02 00:32:54.920835 | orchestrator | Monday 02 June 2025 00:32:54 +0000 (0:00:01.276) 0:07:22.791 *********** 2025-06-02 00:32:56.296113 | orchestrator | changed: [testbed-manager] 2025-06-02 00:32:56.296288 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:32:56.297072 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:32:56.297417 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:32:56.299520 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:32:56.300033 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:32:56.301673 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:32:56.302969 | orchestrator | 2025-06-02 00:32:56.303711 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-02 00:32:56.304818 | orchestrator | 2025-06-02 00:32:56.305705 | orchestrator | TASK [Include hardening role] 
************************************************** 2025-06-02 00:32:56.306807 | orchestrator | Monday 02 June 2025 00:32:56 +0000 (0:00:01.381) 0:07:24.173 *********** 2025-06-02 00:32:56.418879 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:32:56.477436 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:32:56.535422 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:32:56.598820 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:32:56.659642 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:32:56.779937 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:32:56.781622 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:32:56.785246 | orchestrator | 2025-06-02 00:32:56.785290 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-02 00:32:56.785305 | orchestrator | 2025-06-02 00:32:56.785376 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-02 00:32:56.786589 | orchestrator | Monday 02 June 2025 00:32:56 +0000 (0:00:00.487) 0:07:24.660 *********** 2025-06-02 00:32:58.165528 | orchestrator | changed: [testbed-manager] 2025-06-02 00:32:58.166266 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:32:58.168163 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:32:58.168410 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:32:58.168740 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:32:58.169727 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:32:58.170370 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:32:58.171539 | orchestrator | 2025-06-02 00:32:58.172362 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-02 00:32:58.173317 | orchestrator | Monday 02 June 2025 00:32:58 +0000 (0:00:01.383) 0:07:26.044 *********** 2025-06-02 00:32:59.535443 | orchestrator | ok: [testbed-manager] 2025-06-02 00:32:59.535549 | 
orchestrator | ok: [testbed-node-3] 2025-06-02 00:32:59.539712 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:32:59.540571 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:32:59.541658 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:32:59.542436 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:32:59.544876 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:32:59.546131 | orchestrator | 2025-06-02 00:32:59.546417 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-02 00:32:59.547406 | orchestrator | Monday 02 June 2025 00:32:59 +0000 (0:00:01.366) 0:07:27.410 *********** 2025-06-02 00:32:59.842270 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:32:59.904393 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:32:59.971912 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:33:00.031349 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:33:00.089008 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:33:00.460973 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:33:00.461332 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:33:00.462882 | orchestrator | 2025-06-02 00:33:00.463674 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-02 00:33:00.464611 | orchestrator | Monday 02 June 2025 00:33:00 +0000 (0:00:00.929) 0:07:28.340 *********** 2025-06-02 00:33:01.693465 | orchestrator | changed: [testbed-manager] 2025-06-02 00:33:01.694209 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:33:01.695618 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:33:01.697319 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:33:01.698111 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:33:01.698758 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:33:01.699717 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:33:01.700646 | orchestrator | 2025-06-02 00:33:01.701202 | 
orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-02 00:33:01.701562 | orchestrator | 2025-06-02 00:33:01.702185 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-02 00:33:01.702725 | orchestrator | Monday 02 June 2025 00:33:01 +0000 (0:00:01.230) 0:07:29.571 *********** 2025-06-02 00:33:02.603701 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:33:02.604400 | orchestrator | 2025-06-02 00:33:02.605323 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 00:33:02.606227 | orchestrator | Monday 02 June 2025 00:33:02 +0000 (0:00:00.907) 0:07:30.479 *********** 2025-06-02 00:33:03.394386 | orchestrator | ok: [testbed-manager] 2025-06-02 00:33:03.395155 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:33:03.396179 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:33:03.396905 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:33:03.397754 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:33:03.398441 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:33:03.399484 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:33:03.399996 | orchestrator | 2025-06-02 00:33:03.400666 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 00:33:03.401178 | orchestrator | Monday 02 June 2025 00:33:03 +0000 (0:00:00.790) 0:07:31.269 *********** 2025-06-02 00:33:04.527383 | orchestrator | changed: [testbed-manager] 2025-06-02 00:33:04.528855 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:33:04.530528 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:33:04.531759 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:33:04.535011 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:33:04.535081 | orchestrator | 
changed: [testbed-node-1] 2025-06-02 00:33:04.535095 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:33:04.535107 | orchestrator | 2025-06-02 00:33:04.535119 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-02 00:33:04.535565 | orchestrator | Monday 02 June 2025 00:33:04 +0000 (0:00:01.129) 0:07:32.398 *********** 2025-06-02 00:33:05.511461 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:33:05.511856 | orchestrator | 2025-06-02 00:33:05.512932 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 00:33:05.514012 | orchestrator | Monday 02 June 2025 00:33:05 +0000 (0:00:00.987) 0:07:33.386 *********** 2025-06-02 00:33:06.340522 | orchestrator | ok: [testbed-manager] 2025-06-02 00:33:06.341299 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:33:06.343097 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:33:06.344009 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:33:06.344584 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:33:06.345487 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:33:06.346391 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:33:06.346864 | orchestrator | 2025-06-02 00:33:06.347497 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 00:33:06.348344 | orchestrator | Monday 02 June 2025 00:33:06 +0000 (0:00:00.830) 0:07:34.217 *********** 2025-06-02 00:33:06.750269 | orchestrator | changed: [testbed-manager] 2025-06-02 00:33:07.421067 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:33:07.421170 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:33:07.422205 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:33:07.422632 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:33:07.423287 | orchestrator | 
changed: [testbed-node-1] 2025-06-02 00:33:07.424167 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:33:07.425017 | orchestrator | 2025-06-02 00:33:07.425468 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:33:07.425819 | orchestrator | 2025-06-02 00:33:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 00:33:07.425882 | orchestrator | 2025-06-02 00:33:07 | INFO  | Please wait and do not abort execution. 2025-06-02 00:33:07.426853 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-02 00:33:07.427693 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 00:33:07.428344 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 00:33:07.428656 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 00:33:07.429326 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-02 00:33:07.429969 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 00:33:07.430179 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 00:33:07.430700 | orchestrator | 2025-06-02 00:33:07.431085 | orchestrator | 2025-06-02 00:33:07.431607 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:33:07.432353 | orchestrator | Monday 02 June 2025 00:33:07 +0000 (0:00:01.083) 0:07:35.300 *********** 2025-06-02 00:33:07.432755 | orchestrator | =============================================================================== 2025-06-02 00:33:07.433139 | orchestrator | 
osism.commons.packages : Install required packages --------------------- 73.05s 2025-06-02 00:33:07.433875 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.18s 2025-06-02 00:33:07.434132 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.35s 2025-06-02 00:33:07.434868 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.51s 2025-06-02 00:33:07.434893 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.62s 2025-06-02 00:33:07.435155 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.69s 2025-06-02 00:33:07.435482 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.30s 2025-06-02 00:33:07.435875 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.41s 2025-06-02 00:33:07.436369 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.98s 2025-06-02 00:33:07.436586 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.45s 2025-06-02 00:33:07.436986 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.16s 2025-06-02 00:33:07.437519 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.04s 2025-06-02 00:33:07.437858 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.76s 2025-06-02 00:33:07.438142 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.57s 2025-06-02 00:33:07.438489 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.49s 2025-06-02 00:33:07.438838 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.41s 2025-06-02 00:33:07.439213 | orchestrator | 
osism.services.docker : Install apt-transport-https package ------------- 6.18s 2025-06-02 00:33:07.439739 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.76s 2025-06-02 00:33:07.440145 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.75s 2025-06-02 00:33:07.440409 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.57s 2025-06-02 00:33:08.073206 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-02 00:33:08.073353 | orchestrator | + osism apply network 2025-06-02 00:33:10.055319 | orchestrator | Registering Redlock._acquired_script 2025-06-02 00:33:10.055569 | orchestrator | Registering Redlock._extend_script 2025-06-02 00:33:10.055611 | orchestrator | Registering Redlock._release_script 2025-06-02 00:33:10.118922 | orchestrator | 2025-06-02 00:33:10 | INFO  | Task 18a1d363-3c0d-41ef-9040-50de39ddcec4 (network) was prepared for execution. 2025-06-02 00:33:10.119120 | orchestrator | 2025-06-02 00:33:10 | INFO  | It takes a moment until task 18a1d363-3c0d-41ef-9040-50de39ddcec4 (network) has been started and output is visible here. 
2025-06-02 00:33:14.216149 | orchestrator |
2025-06-02 00:33:14.216400 | orchestrator | PLAY [Apply role network] ******************************************************
2025-06-02 00:33:14.218562 | orchestrator |
2025-06-02 00:33:14.220248 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-06-02 00:33:14.221172 | orchestrator | Monday 02 June 2025 00:33:14 +0000 (0:00:00.259) 0:00:00.259 ***********
2025-06-02 00:33:14.357316 | orchestrator | ok: [testbed-manager]
2025-06-02 00:33:14.444224 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:33:14.518715 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:33:14.592220 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:33:14.770320 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:33:14.897711 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:33:14.898528 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:33:14.898559 | orchestrator |
2025-06-02 00:33:14.899451 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-06-02 00:33:14.903632 | orchestrator | Monday 02 June 2025 00:33:14 +0000 (0:00:00.684) 0:00:00.943 ***********
2025-06-02 00:33:16.052271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:33:16.052485 | orchestrator |
2025-06-02 00:33:16.053589 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-06-02 00:33:16.054215 | orchestrator | Monday 02 June 2025 00:33:16 +0000 (0:00:01.153) 0:00:02.097 ***********
2025-06-02 00:33:17.877618 | orchestrator | ok: [testbed-manager]
2025-06-02 00:33:17.877757 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:33:17.877784 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:33:17.880614 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:33:17.881739 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:33:17.882560 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:33:17.883012 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:33:17.885002 | orchestrator |
2025-06-02 00:33:17.885083 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-06-02 00:33:17.885797 | orchestrator | Monday 02 June 2025 00:33:17 +0000 (0:00:01.823) 0:00:03.920 ***********
2025-06-02 00:33:19.515022 | orchestrator | ok: [testbed-manager]
2025-06-02 00:33:19.516332 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:33:19.517225 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:33:19.518600 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:33:19.520197 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:33:19.520241 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:33:19.520552 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:33:19.523177 | orchestrator |
2025-06-02 00:33:19.525080 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-06-02 00:33:19.525175 | orchestrator | Monday 02 June 2025 00:33:19 +0000 (0:00:01.634) 0:00:05.555 ***********
2025-06-02 00:33:20.049524 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-06-02 00:33:20.050082 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-06-02 00:33:20.502331 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-06-02 00:33:20.504125 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-06-02 00:33:20.504154 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-06-02 00:33:20.504520 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-06-02 00:33:20.505498 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-06-02 00:33:20.506279 | orchestrator |
2025-06-02 00:33:20.506900 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-06-02 00:33:20.507416 | orchestrator | Monday 02 June 2025 00:33:20 +0000 (0:00:00.993) 0:00:06.549 ***********
2025-06-02 00:33:23.643541 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 00:33:23.644335 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 00:33:23.647003 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 00:33:23.648353 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 00:33:23.648767 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:33:23.650112 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 00:33:23.651171 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 00:33:23.653060 | orchestrator |
2025-06-02 00:33:23.653959 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-02 00:33:23.654953 | orchestrator | Monday 02 June 2025 00:33:23 +0000 (0:00:03.137) 0:00:09.686 ***********
2025-06-02 00:33:25.038672 | orchestrator | changed: [testbed-manager]
2025-06-02 00:33:25.039410 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:33:25.044005 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:33:25.045204 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:33:25.046534 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:33:25.047420 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:33:25.048511 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:33:25.050234 | orchestrator |
2025-06-02 00:33:25.050961 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-02 00:33:25.052071 | orchestrator | Monday 02 June 2025 00:33:25 +0000 (0:00:01.398) 0:00:11.085 ***********
2025-06-02 00:33:26.894511 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:33:26.895340 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 00:33:26.899463 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 00:33:26.900075 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 00:33:26.900917 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 00:33:26.901780 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 00:33:26.903334 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 00:33:26.903946 | orchestrator |
2025-06-02 00:33:26.904855 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-06-02 00:33:26.907826 | orchestrator | Monday 02 June 2025 00:33:26 +0000 (0:00:01.856) 0:00:12.941 ***********
2025-06-02 00:33:27.298742 | orchestrator | ok: [testbed-manager]
2025-06-02 00:33:27.564160 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:33:27.967595 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:33:27.968326 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:33:27.970202 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:33:27.970699 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:33:27.971858 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:33:27.972617 | orchestrator |
2025-06-02 00:33:27.973465 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-06-02 00:33:27.974206 | orchestrator | Monday 02 June 2025 00:33:27 +0000 (0:00:01.068) 0:00:14.010 ***********
2025-06-02 00:33:28.122547 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:33:28.201248 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:33:28.280600 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:33:28.355461 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:33:28.432205 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:33:28.565226 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:33:28.565307 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:33:28.565703 | orchestrator |
2025-06-02 00:33:28.566543 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-06-02 00:33:28.567245 | orchestrator | Monday 02 June 2025 00:33:28 +0000 (0:00:00.603) 0:00:14.613 ***********
2025-06-02 00:33:30.703561 | orchestrator | ok: [testbed-manager]
2025-06-02 00:33:30.703672 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:33:30.704833 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:33:30.705757 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:33:30.706468 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:33:30.707636 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:33:30.708028 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:33:30.709402 | orchestrator |
2025-06-02 00:33:30.710495 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-06-02 00:33:30.710783 | orchestrator | Monday 02 June 2025 00:33:30 +0000 (0:00:02.129) 0:00:16.743 ***********
2025-06-02 00:33:30.980382 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:33:31.061210 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:33:31.140991 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:33:31.218938 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:33:31.581775 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:33:31.582163 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:33:31.583686 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-06-02 00:33:31.584182 | orchestrator |
2025-06-02 00:33:31.585544 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-06-02 00:33:31.585865 | orchestrator | Monday 02 June 2025 00:33:31 +0000 (0:00:00.886) 0:00:17.630 ***********
2025-06-02 00:33:33.190178 | orchestrator | ok: [testbed-manager]
2025-06-02 00:33:33.193526 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:33:33.194734 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:33:33.196166 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:33:33.197325 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:33:33.198189 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:33:33.198974 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:33:33.202280 | orchestrator |
2025-06-02 00:33:33.203161 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-06-02 00:33:33.203620 | orchestrator | Monday 02 June 2025 00:33:33 +0000 (0:00:01.603) 0:00:19.233 ***********
2025-06-02 00:33:34.352151 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:33:34.355075 | orchestrator |
2025-06-02 00:33:34.355906 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-02 00:33:34.356699 | orchestrator | Monday 02 June 2025 00:33:34 +0000 (0:00:01.161) 0:00:20.394 ***********
2025-06-02 00:33:34.891805 | orchestrator | ok: [testbed-manager]
2025-06-02 00:33:35.450690 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:33:35.451388 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:33:35.455339 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:33:35.456027 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:33:35.456941 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:33:35.457723 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:33:35.458577 | orchestrator |
2025-06-02 00:33:35.459387 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-06-02 00:33:35.460209 | orchestrator | Monday 02 June 2025 00:33:35 +0000 (0:00:00.490) 0:00:21.495 ***********
2025-06-02 00:33:35.616893 | orchestrator | ok: [testbed-manager]
2025-06-02 00:33:35.701595 | orchestrator | ok: [testbed-node-0]
2025-06-02
00:33:35.785929 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:33:35.869158 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:33:35.948996 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:33:36.069756 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:33:36.070754 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:33:36.074199 | orchestrator | 2025-06-02 00:33:36.074254 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-02 00:33:36.074276 | orchestrator | Monday 02 June 2025 00:33:36 +0000 (0:00:00.619) 0:00:22.114 *********** 2025-06-02 00:33:36.471036 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 00:33:36.471167 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 00:33:36.760140 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 00:33:36.761178 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 00:33:36.761816 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 00:33:36.762465 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 00:33:36.765791 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 00:33:36.765902 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 00:33:36.765919 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 00:33:36.765930 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 00:33:37.206783 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 00:33:37.210256 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 00:33:37.210301 | orchestrator | changed: [testbed-node-5] 
=> (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 00:33:37.210316 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 00:33:37.210727 | orchestrator | 2025-06-02 00:33:37.212166 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-02 00:33:37.213304 | orchestrator | Monday 02 June 2025 00:33:37 +0000 (0:00:01.136) 0:00:23.250 *********** 2025-06-02 00:33:37.362926 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:33:37.444411 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:33:37.524455 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:33:37.602681 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:33:37.689251 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:33:37.797864 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:33:37.798097 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:33:37.799439 | orchestrator | 2025-06-02 00:33:37.803494 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-02 00:33:37.803578 | orchestrator | Monday 02 June 2025 00:33:37 +0000 (0:00:00.594) 0:00:23.845 *********** 2025-06-02 00:33:41.284343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5 2025-06-02 00:33:41.284459 | orchestrator | 2025-06-02 00:33:41.284765 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-02 00:33:41.285370 | orchestrator | Monday 02 June 2025 00:33:41 +0000 (0:00:03.481) 0:00:27.327 *********** 2025-06-02 00:33:45.704243 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:45.704781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:45.706513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:45.708288 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:45.710251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:45.711509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:45.714325 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:45.715542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': 
['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:45.716155 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:45.716786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:45.717588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:45.718236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:45.719320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:45.719900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:45.720783 | orchestrator | 2025-06-02 00:33:45.721904 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-02 00:33:45.722806 | orchestrator | Monday 02 June 2025 00:33:45 +0000 (0:00:04.419) 0:00:31.746 *********** 2025-06-02 00:33:49.880887 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:49.884575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:49.884647 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:49.884663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:49.884677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:49.885878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:49.886789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:49.888913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 00:33:49.890529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:49.890557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:49.891501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:49.892178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:49.893136 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:49.893854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 00:33:49.894480 | orchestrator | 2025-06-02 00:33:49.894928 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-02 00:33:49.895481 | orchestrator | Monday 02 June 2025 00:33:49 +0000 (0:00:04.181) 0:00:35.928 *********** 2025-06-02 00:33:50.982320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:33:50.983263 | orchestrator | 2025-06-02 00:33:50.986937 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-02 00:33:50.987002 | orchestrator | Monday 02 June 2025 00:33:50 +0000 (0:00:01.098) 0:00:37.026 *********** 2025-06-02 00:33:51.412297 | orchestrator | ok: [testbed-manager] 2025-06-02 00:33:51.670211 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:33:52.089705 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:33:52.090209 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:33:52.093536 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:33:52.094352 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:33:52.095797 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:33:52.097922 | orchestrator | 2025-06-02 00:33:52.100152 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-06-02 00:33:52.101184 | orchestrator | Monday 02 June 2025 00:33:52 +0000 (0:00:01.110) 0:00:38.137 *********** 2025-06-02 00:33:52.193247 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 00:33:52.194394 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 00:33:52.195780 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 00:33:52.196243 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 00:33:52.293476 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:33:52.294209 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 00:33:52.295136 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 00:33:52.296298 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 00:33:52.297275 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 00:33:52.381233 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:33:52.382826 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 00:33:52.383710 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 00:33:52.388714 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 00:33:52.390151 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 00:33:52.473108 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 00:33:52.474442 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  
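For orientation: based on the parameters logged for the netdev/network tasks above (vni 42, mtu 1350, local_ip, per-node dests, and an address only on testbed-manager), the generated unit files on testbed-manager might look roughly like the sketch below. This is an illustrative reconstruction from the logged item values, not a dump of the actual templates used by osism.commons.network; the exact keys and sections the role emits may differ.

```ini
; /etc/systemd/network/30-vxlan0.netdev (illustrative sketch)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

; /etc/systemd/network/30-vxlan0.network (illustrative sketch)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

The 'dests' lists in the log suggest a full mesh of unicast peers (one FDB entry per remote VTEP, e.g. via [BridgeFDB] sections with Destination=) rather than multicast flooding, which is why every node lists all other nodes' 192.168.16.x addresses.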
2025-06-02 00:33:52.476151 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 00:33:52.477519 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 00:33:52.564813 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:33:52.565324 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 00:33:52.566993 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 00:33:52.568106 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 00:33:52.569289 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 00:33:52.806211 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:33:52.806944 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 00:33:52.808969 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 00:33:52.810177 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 00:33:52.811548 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 00:33:54.023347 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:33:54.023450 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:33:54.023958 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 00:33:54.025498 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 00:33:54.026285 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 00:33:54.026920 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 00:33:54.029868 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:33:54.029903 | orchestrator |
2025-06-02 00:33:54.029916 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-06-02 00:33:54.029928 | orchestrator | Monday 02 June 2025 00:33:54 +0000 (0:00:01.929) 0:00:40.067 ***********
2025-06-02 00:33:54.189679 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:33:54.273392 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:33:54.352717 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:33:54.432773 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:33:54.512025 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:33:54.623318 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:33:54.623684 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:33:54.627962 | orchestrator |
2025-06-02 00:33:54.628011 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-06-02 00:33:54.628025 | orchestrator | Monday 02 June 2025 00:33:54 +0000 (0:00:00.603) 0:00:40.670 ***********
2025-06-02 00:33:54.774839 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:33:55.012502 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:33:55.091724 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:33:55.171720 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:33:55.249406 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:33:55.282243 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:33:55.282655 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:33:55.283085 | orchestrator |
2025-06-02 00:33:55.283584 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:33:55.284004 | orchestrator | 2025-06-02 00:33:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:33:55.284104 | orchestrator | 2025-06-02 00:33:55 | INFO  | Please wait and do not abort execution.
2025-06-02 00:33:55.285145 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 00:33:55.285667 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 00:33:55.286284 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 00:33:55.286634 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 00:33:55.287447 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 00:33:55.288111 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 00:33:55.288588 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 00:33:55.289948 | orchestrator |
2025-06-02 00:33:55.291362 | orchestrator |
2025-06-02 00:33:55.291799 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:33:55.292964 | orchestrator | Monday 02 June 2025 00:33:55 +0000 (0:00:00.658) 0:00:41.329 ***********
2025-06-02 00:33:55.293219 | orchestrator | ===============================================================================
2025-06-02 00:33:55.293526 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.42s
2025-06-02 00:33:55.294114 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.18s
2025-06-02 00:33:55.294249 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.48s
2025-06-02 00:33:55.295020 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.14s
2025-06-02 00:33:55.295544 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.13s
2025-06-02 00:33:55.295770 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.93s
2025-06-02 00:33:55.295854 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.86s
2025-06-02 00:33:55.296207 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.82s
2025-06-02 00:33:55.296603 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.63s
2025-06-02 00:33:55.296950 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.60s
2025-06-02 00:33:55.297263 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.40s
2025-06-02 00:33:55.297617 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.16s
2025-06-02 00:33:55.297937 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.15s
2025-06-02 00:33:55.298286 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.14s
2025-06-02 00:33:55.298603 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s
2025-06-02 00:33:55.298965 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s
2025-06-02 00:33:55.299316 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s
2025-06-02 00:33:55.299593 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.07s
2025-06-02 00:33:55.299915 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s
2025-06-02 00:33:55.300230 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.89s
2025-06-02 00:33:55.840052 | orchestrator | + osism apply wireguard
2025-06-02 00:33:57.453982 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:33:57.454213 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:33:57.454245 | orchestrator | Registering Redlock._release_script
2025-06-02 00:33:57.510505 | orchestrator | 2025-06-02 00:33:57 | INFO  | Task 889816c6-202a-471b-a2aa-627e59a65e09 (wireguard) was prepared for execution.
2025-06-02 00:33:57.510577 | orchestrator | 2025-06-02 00:33:57 | INFO  | It takes a moment until task 889816c6-202a-471b-a2aa-627e59a65e09 (wireguard) has been started and output is visible here.
2025-06-02 00:34:00.953883 | orchestrator |
2025-06-02 00:34:00.954125 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-06-02 00:34:00.954515 | orchestrator |
2025-06-02 00:34:00.954916 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-06-02 00:34:00.955701 | orchestrator | Monday 02 June 2025 00:34:00 +0000 (0:00:00.164) 0:00:00.164 ***********
2025-06-02 00:34:01.959431 | orchestrator | ok: [testbed-manager]
2025-06-02 00:34:01.960581 | orchestrator |
2025-06-02 00:34:01.961207 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-06-02 00:34:01.963017 | orchestrator | Monday 02 June 2025 00:34:01 +0000 (0:00:01.007) 0:00:01.171 ***********
2025-06-02 00:34:07.357953 | orchestrator | changed: [testbed-manager]
2025-06-02 00:34:07.358683 | orchestrator |
2025-06-02 00:34:07.359844 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-06-02 00:34:07.361674 | orchestrator | Monday 02 June 2025 00:34:07 +0000 (0:00:05.395) 0:00:06.567 ***********
2025-06-02 00:34:07.892388 | orchestrator | changed: [testbed-manager]
2025-06-02 00:34:07.892628 | orchestrator |
2025-06-02 00:34:07.893136 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-06-02 00:34:07.893815 | orchestrator | Monday 02 June 2025 00:34:07 +0000 (0:00:00.537) 0:00:07.104 ***********
2025-06-02 00:34:08.294656 | orchestrator | changed: [testbed-manager]
2025-06-02 00:34:08.295098 | orchestrator |
2025-06-02 00:34:08.296710 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-06-02 00:34:08.297240 | orchestrator | Monday 02 June 2025 00:34:08 +0000 (0:00:00.401) 0:00:07.506 ***********
2025-06-02 00:34:08.808375 | orchestrator | ok: [testbed-manager]
2025-06-02 00:34:08.809011 | orchestrator |
2025-06-02 00:34:08.809624 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-06-02 00:34:08.810524 | orchestrator | Monday 02 June 2025 00:34:08 +0000 (0:00:00.512) 0:00:08.019 ***********
2025-06-02 00:34:09.307923 | orchestrator | ok: [testbed-manager]
2025-06-02 00:34:09.308839 | orchestrator |
2025-06-02 00:34:09.309928 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-06-02 00:34:09.310547 | orchestrator | Monday 02 June 2025 00:34:09 +0000 (0:00:00.501) 0:00:08.520 ***********
2025-06-02 00:34:09.736199 | orchestrator | ok: [testbed-manager]
2025-06-02 00:34:09.736766 | orchestrator |
2025-06-02 00:34:09.737736 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-06-02 00:34:09.738893 | orchestrator | Monday 02 June 2025 00:34:09 +0000 (0:00:00.426) 0:00:08.947 ***********
2025-06-02 00:34:10.900062 | orchestrator | changed: [testbed-manager]
2025-06-02 00:34:10.900661 | orchestrator |
2025-06-02 00:34:10.901533 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-06-02 00:34:10.902224 | orchestrator | Monday 02 June 2025 00:34:10 +0000 (0:00:01.163) 0:00:10.110 ***********
2025-06-02 00:34:11.779604 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 00:34:11.779711 | orchestrator | changed: [testbed-manager]
2025-06-02 00:34:11.780474 | orchestrator |
2025-06-02 00:34:11.781214 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-06-02 00:34:11.781842 | orchestrator | Monday 02 June 2025 00:34:11 +0000 (0:00:00.878) 0:00:10.989 ***********
2025-06-02 00:34:13.367585 | orchestrator | changed: [testbed-manager]
2025-06-02 00:34:13.368293 | orchestrator |
2025-06-02 00:34:13.368809 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-02 00:34:13.369566 | orchestrator | Monday 02 June 2025 00:34:13 +0000 (0:00:01.588) 0:00:12.578 ***********
2025-06-02 00:34:14.267588 | orchestrator | changed: [testbed-manager]
2025-06-02 00:34:14.269975 | orchestrator | 2025-06-02 00:34:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:34:14.270114 | orchestrator | 2025-06-02 00:34:14 | INFO  | Please wait and do not abort execution.
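The wg0.conf written by the role above is not shown in the log. A typical wg-quick configuration of this shape (server interface plus one peer, with the generated private/public/preshared keys referenced by the tasks) looks roughly like the following sketch; the interface address, port, and AllowedIPs here are hypothetical placeholders, not values taken from this deployment.

```ini
; /etc/wireguard/wg0.conf (illustrative sketch; keys elided, addresses hypothetical)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server private key created above>

[Peer]
PublicKey = <client public key>
PresharedKey = <preshared key created above>
AllowedIPs = 10.0.0.2/32
```

Enabling wg-quick@wg0.service, as the "Manage wg-quick@wg0.service service" task does, brings this interface up on boot; the matching client configuration files are what the "Copy client configuration files" task writes out.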
2025-06-02 00:34:14.270198 | orchestrator | 2025-06-02 00:34:14.270214 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:34:14.271824 | orchestrator | testbed-manager : ok=11 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:34:14.274048 | orchestrator | 2025-06-02 00:34:14.274569 | orchestrator | 2025-06-02 00:34:14.274691 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:34:14.275181 | orchestrator | Monday 02 June 2025 00:34:14 +0000 (0:00:00.899) 0:00:13.477 *********** 2025-06-02 00:34:14.275257 | orchestrator | =============================================================================== 2025-06-02 00:34:14.276049 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.40s 2025-06-02 00:34:14.276421 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.59s 2025-06-02 00:34:14.277278 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s 2025-06-02 00:34:14.277456 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.01s 2025-06-02 00:34:14.277706 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2025-06-02 00:34:14.277942 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2025-06-02 00:34:14.278238 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2025-06-02 00:34:14.278369 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s 2025-06-02 00:34:14.278652 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.50s 2025-06-02 00:34:14.278912 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s
2025-06-02 00:34:14.279132 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s 2025-06-02 00:34:14.787669 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-06-02 00:34:14.897978 | orchestrator | [curl progress meter: 100% of 15 bytes received] 2025-06-02 00:34:14.914898 | orchestrator | + osism apply --environment custom workarounds 2025-06-02 00:34:16.557669 | orchestrator | 2025-06-02 00:34:16 | INFO | Trying to run play workarounds in environment custom 2025-06-02 00:34:16.574186 | orchestrator | Registering Redlock._acquired_script 2025-06-02 00:34:16.577878 | orchestrator | Registering Redlock._extend_script 2025-06-02 00:34:16.577910 | orchestrator | Registering Redlock._release_script 2025-06-02 00:34:16.619493 | orchestrator | 2025-06-02 00:34:16 | INFO | Task f1d4930a-7aae-4353-933c-d7a93055c91e (workarounds) was prepared for execution. 2025-06-02 00:34:16.619636 | orchestrator | 2025-06-02 00:34:16 | INFO | It takes a moment until task f1d4930a-7aae-4353-933c-d7a93055c91e (workarounds) has been started and output is visible here.
2025-06-02 00:34:20.373397 | orchestrator | 2025-06-02 00:34:20.376978 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:34:20.377020 | orchestrator | 2025-06-02 00:34:20.377032 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-02 00:34:20.377395 | orchestrator | Monday 02 June 2025 00:34:20 +0000 (0:00:00.139) 0:00:00.139 *********** 2025-06-02 00:34:20.531037 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-02 00:34:20.612723 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-02 00:34:20.694651 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-02 00:34:20.775475 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-02 00:34:20.947990 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-02 00:34:21.084624 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-02 00:34:21.085327 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-02 00:34:21.086304 | orchestrator | 2025-06-02 00:34:21.087899 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-02 00:34:21.089353 | orchestrator | 2025-06-02 00:34:21.089732 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-02 00:34:21.090171 | orchestrator | Monday 02 June 2025 00:34:21 +0000 (0:00:00.714) 0:00:00.853 *********** 2025-06-02 00:34:23.339476 | orchestrator | ok: [testbed-manager] 2025-06-02 00:34:23.339582 | orchestrator | 2025-06-02 00:34:23.340617 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-02 00:34:23.341956 | orchestrator | 2025-06-02 00:34:23.344852 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-06-02 00:34:23.344894 | orchestrator | Monday 02 June 2025 00:34:23 +0000 (0:00:02.249) 0:00:03.102 *********** 2025-06-02 00:34:25.098869 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:34:25.099815 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:34:25.100778 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:34:25.103755 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:34:25.103897 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:34:25.104241 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:34:25.107770 | orchestrator | 2025-06-02 00:34:25.107798 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-02 00:34:25.107812 | orchestrator | 2025-06-02 00:34:25.108216 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-06-02 00:34:25.109226 | orchestrator | Monday 02 June 2025 00:34:25 +0000 (0:00:01.762) 0:00:04.865 *********** 2025-06-02 00:34:26.562594 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 00:34:26.563977 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 00:34:26.564746 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 00:34:26.566887 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 00:34:26.567681 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 00:34:26.568450 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 00:34:26.569618 | orchestrator | 2025-06-02 00:34:26.570417 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-06-02 00:34:26.571074 | orchestrator | Monday 02 June 2025 00:34:26 +0000 (0:00:01.460) 0:00:06.325 *********** 2025-06-02 00:34:30.319594 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:34:30.319814 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:34:30.320617 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:34:30.322212 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:34:30.323105 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:34:30.323847 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:34:30.324691 | orchestrator | 2025-06-02 00:34:30.325370 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-02 00:34:30.326385 | orchestrator | Monday 02 June 2025 00:34:30 +0000 (0:00:03.761) 0:00:10.086 *********** 2025-06-02 00:34:30.462607 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:34:30.530898 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:34:30.604241 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:34:30.679882 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:34:30.956484 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:34:30.956585 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:34:30.956909 | orchestrator | 2025-06-02 00:34:30.957505 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-02 00:34:30.958377 | orchestrator | 2025-06-02 00:34:30.958979 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-02 00:34:30.959716 | orchestrator | Monday 02 June 2025 00:34:30 +0000 (0:00:00.636) 0:00:10.723 *********** 2025-06-02 00:34:32.600852 | orchestrator | changed: [testbed-manager] 2025-06-02 00:34:32.601693 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:34:32.605045 | orchestrator | changed: [testbed-node-4] 2025-06-02 
00:34:32.605067 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:34:32.605938 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:34:32.606553 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:34:32.608513 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:34:32.608778 | orchestrator | 2025-06-02 00:34:32.610059 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-06-02 00:34:32.610736 | orchestrator | Monday 02 June 2025 00:34:32 +0000 (0:00:01.644) 0:00:12.367 *********** 2025-06-02 00:34:34.196812 | orchestrator | changed: [testbed-manager] 2025-06-02 00:34:34.197010 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:34:34.198708 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:34:34.199684 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:34:34.200915 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:34:34.202391 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:34:34.203506 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:34:34.204953 | orchestrator | 2025-06-02 00:34:34.205867 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-02 00:34:34.206891 | orchestrator | Monday 02 June 2025 00:34:34 +0000 (0:00:01.581) 0:00:13.949 *********** 2025-06-02 00:34:35.774571 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:34:35.775400 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:34:35.776189 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:34:35.778872 | orchestrator | ok: [testbed-manager] 2025-06-02 00:34:35.778899 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:34:35.780588 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:34:35.781903 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:34:35.782461 | orchestrator | 2025-06-02 00:34:35.782940 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-02 00:34:35.783539 | orchestrator 
| Monday 02 June 2025 00:34:35 +0000 (0:00:01.590) 0:00:15.540 *********** 2025-06-02 00:34:37.470147 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:34:37.470330 | orchestrator | changed: [testbed-manager] 2025-06-02 00:34:37.471012 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:34:37.474156 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:34:37.474874 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:34:37.475767 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:34:37.476632 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:34:37.477661 | orchestrator | 2025-06-02 00:34:37.478472 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-02 00:34:37.479312 | orchestrator | Monday 02 June 2025 00:34:37 +0000 (0:00:01.693) 0:00:17.233 *********** 2025-06-02 00:34:37.630325 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:34:37.706869 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:34:37.785834 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:34:37.864541 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:34:37.939859 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:34:38.074162 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:34:38.075020 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:34:38.076591 | orchestrator | 2025-06-02 00:34:38.077598 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-02 00:34:38.078623 | orchestrator | 2025-06-02 00:34:38.079516 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-02 00:34:38.080577 | orchestrator | Monday 02 June 2025 00:34:38 +0000 (0:00:00.606) 0:00:17.840 *********** 2025-06-02 00:34:40.738246 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:34:40.740694 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:34:40.741340 | orchestrator | ok: [testbed-node-5] 
2025-06-02 00:34:40.743411 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:34:40.744168 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:34:40.744722 | orchestrator | ok: [testbed-manager] 2025-06-02 00:34:40.745471 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:34:40.746447 | orchestrator | 2025-06-02 00:34:40.746738 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:34:40.747247 | orchestrator | 2025-06-02 00:34:40 | INFO | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 00:34:40.747413 | orchestrator | 2025-06-02 00:34:40 | INFO | Please wait and do not abort execution. 2025-06-02 00:34:40.747992 | orchestrator | testbed-manager : ok=7 changed=4 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2025-06-02 00:34:40.748975 | orchestrator | testbed-node-0 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:40.749768 | orchestrator | testbed-node-1 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:40.750613 | orchestrator | testbed-node-2 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:40.751069 | orchestrator | testbed-node-3 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:40.751707 | orchestrator | testbed-node-4 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:40.752593 | orchestrator | testbed-node-5 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:40.754192 | orchestrator | 2025-06-02 00:34:40.755218 | orchestrator | 2025-06-02 00:34:40.756496 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:34:40.757264 | orchestrator | Monday 02 June 2025 00:34:40 +0000 (0:00:02.663) 0:00:20.504 *********** 2025-06-02 00:34:40.758337 | orchestrator
| =============================================================================== 2025-06-02 00:34:40.759165 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.76s 2025-06-02 00:34:40.760038 | orchestrator | Install python3-docker -------------------------------------------------- 2.66s 2025-06-02 00:34:40.760907 | orchestrator | Apply netplan configuration --------------------------------------------- 2.25s 2025-06-02 00:34:40.761593 | orchestrator | Apply netplan configuration --------------------------------------------- 1.76s 2025-06-02 00:34:40.762473 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.69s 2025-06-02 00:34:40.763287 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2025-06-02 00:34:40.763935 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.59s 2025-06-02 00:34:40.764640 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s 2025-06-02 00:34:40.765084 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s 2025-06-02 00:34:40.765713 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.71s 2025-06-02 00:34:40.766120 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.64s 2025-06-02 00:34:40.768746 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-06-02 00:34:41.275252 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-02 00:34:43.261466 | orchestrator | Registering Redlock._acquired_script 2025-06-02 00:34:43.261571 | orchestrator | Registering Redlock._extend_script 2025-06-02 00:34:43.261586 | orchestrator | Registering Redlock._release_script 2025-06-02 00:34:43.317669 | orchestrator | 2025-06-02 00:34:43 | INFO  | Task 
3e5f4570-0cfb-42a4-9370-eb54bf13bb67 (reboot) was prepared for execution. 2025-06-02 00:34:43.317727 | orchestrator | 2025-06-02 00:34:43 | INFO  | It takes a moment until task 3e5f4570-0cfb-42a4-9370-eb54bf13bb67 (reboot) has been started and output is visible here. 2025-06-02 00:34:47.182176 | orchestrator | 2025-06-02 00:34:47.182240 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 00:34:47.182248 | orchestrator | 2025-06-02 00:34:47.183376 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 00:34:47.183487 | orchestrator | Monday 02 June 2025 00:34:47 +0000 (0:00:00.180) 0:00:00.180 *********** 2025-06-02 00:34:47.247457 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:34:47.247952 | orchestrator | 2025-06-02 00:34:47.248209 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 00:34:47.249015 | orchestrator | Monday 02 June 2025 00:34:47 +0000 (0:00:00.067) 0:00:00.247 *********** 2025-06-02 00:34:48.093472 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:34:48.094087 | orchestrator | 2025-06-02 00:34:48.094751 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 00:34:48.095696 | orchestrator | Monday 02 June 2025 00:34:48 +0000 (0:00:00.843) 0:00:01.091 *********** 2025-06-02 00:34:48.197016 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:34:48.197174 | orchestrator | 2025-06-02 00:34:48.198199 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 00:34:48.198841 | orchestrator | 2025-06-02 00:34:48.199496 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 00:34:48.200112 | orchestrator | Monday 02 June 2025 00:34:48 +0000 (0:00:00.103) 0:00:01.194 *********** 2025-06-02 00:34:48.274155 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 00:34:48.274672 | orchestrator | 2025-06-02 00:34:48.275905 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 00:34:48.276107 | orchestrator | Monday 02 June 2025 00:34:48 +0000 (0:00:00.079) 0:00:01.274 *********** 2025-06-02 00:34:48.909668 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:34:48.909832 | orchestrator | 2025-06-02 00:34:48.911002 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 00:34:48.911367 | orchestrator | Monday 02 June 2025 00:34:48 +0000 (0:00:00.634) 0:00:01.908 *********** 2025-06-02 00:34:49.007571 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:34:49.008756 | orchestrator | 2025-06-02 00:34:49.009398 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 00:34:49.010146 | orchestrator | 2025-06-02 00:34:49.010765 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 00:34:49.011349 | orchestrator | Monday 02 June 2025 00:34:49 +0000 (0:00:00.097) 0:00:02.006 *********** 2025-06-02 00:34:49.161550 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:34:49.161971 | orchestrator | 2025-06-02 00:34:49.163078 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 00:34:49.163472 | orchestrator | Monday 02 June 2025 00:34:49 +0000 (0:00:00.155) 0:00:02.161 *********** 2025-06-02 00:34:49.795847 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:34:49.796473 | orchestrator | 2025-06-02 00:34:49.797974 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 00:34:49.798891 | orchestrator | Monday 02 June 2025 00:34:49 +0000 (0:00:00.633) 0:00:02.795 *********** 2025-06-02 00:34:49.909159 | orchestrator | skipping: [testbed-node-2] 
2025-06-02 00:34:49.910125 | orchestrator | 2025-06-02 00:34:49.910591 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 00:34:49.911527 | orchestrator | 2025-06-02 00:34:49.912239 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 00:34:49.912714 | orchestrator | Monday 02 June 2025 00:34:49 +0000 (0:00:00.111) 0:00:02.906 *********** 2025-06-02 00:34:50.011148 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:34:50.012429 | orchestrator | 2025-06-02 00:34:50.013144 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 00:34:50.013955 | orchestrator | Monday 02 June 2025 00:34:50 +0000 (0:00:00.104) 0:00:03.010 *********** 2025-06-02 00:34:50.645488 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:34:50.645616 | orchestrator | 2025-06-02 00:34:50.646615 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 00:34:50.647704 | orchestrator | Monday 02 June 2025 00:34:50 +0000 (0:00:00.631) 0:00:03.642 *********** 2025-06-02 00:34:50.747585 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:34:50.748187 | orchestrator | 2025-06-02 00:34:50.748987 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 00:34:50.749957 | orchestrator | 2025-06-02 00:34:50.750840 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 00:34:50.751618 | orchestrator | Monday 02 June 2025 00:34:50 +0000 (0:00:00.103) 0:00:03.745 *********** 2025-06-02 00:34:50.837355 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:34:50.837833 | orchestrator | 2025-06-02 00:34:50.838836 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 00:34:50.839863 | orchestrator | Monday 02 June 2025 
00:34:50 +0000 (0:00:00.091) 0:00:03.837 *********** 2025-06-02 00:34:51.491466 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:34:51.491666 | orchestrator | 2025-06-02 00:34:51.492618 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 00:34:51.493474 | orchestrator | Monday 02 June 2025 00:34:51 +0000 (0:00:00.652) 0:00:04.489 *********** 2025-06-02 00:34:51.590526 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:34:51.591944 | orchestrator | 2025-06-02 00:34:51.592484 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 00:34:51.593008 | orchestrator | 2025-06-02 00:34:51.593691 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 00:34:51.594266 | orchestrator | Monday 02 June 2025 00:34:51 +0000 (0:00:00.101) 0:00:04.590 *********** 2025-06-02 00:34:51.679531 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:34:51.680545 | orchestrator | 2025-06-02 00:34:51.682233 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 00:34:51.683510 | orchestrator | Monday 02 June 2025 00:34:51 +0000 (0:00:00.088) 0:00:04.678 *********** 2025-06-02 00:34:52.322716 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:34:52.324766 | orchestrator | 2025-06-02 00:34:52.324860 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 00:34:52.325782 | orchestrator | Monday 02 June 2025 00:34:52 +0000 (0:00:00.641) 0:00:05.320 *********** 2025-06-02 00:34:52.355948 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:34:52.356562 | orchestrator | 2025-06-02 00:34:52.357651 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:34:52.358209 | orchestrator | 2025-06-02 00:34:52 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-06-02 00:34:52.358295 | orchestrator | 2025-06-02 00:34:52 | INFO | Please wait and do not abort execution. 2025-06-02 00:34:52.359317 | orchestrator | testbed-node-0 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:52.360059 | orchestrator | testbed-node-1 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:52.360751 | orchestrator | testbed-node-2 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:52.361379 | orchestrator | testbed-node-3 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:52.361915 | orchestrator | testbed-node-4 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:52.362602 | orchestrator | testbed-node-5 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2025-06-02 00:34:52.363444 | orchestrator | 2025-06-02 00:34:52.363928 | orchestrator | 2025-06-02 00:34:52.364741 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:34:52.365216 | orchestrator | Monday 02 June 2025 00:34:52 +0000 (0:00:00.035) 0:00:05.356 *********** 2025-06-02 00:34:52.365900 | orchestrator | =============================================================================== 2025-06-02 00:34:52.366324 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.04s 2025-06-02 00:34:52.366865 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.59s 2025-06-02 00:34:52.367252 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s 2025-06-02 00:34:52.894177 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-02 00:34:54.562438 | orchestrator | Registering Redlock._acquired_script 2025-06-02 
00:34:54.562553 | orchestrator | Registering Redlock._extend_script 2025-06-02 00:34:54.562599 | orchestrator | Registering Redlock._release_script 2025-06-02 00:34:54.617459 | orchestrator | 2025-06-02 00:34:54 | INFO  | Task 5a7c8756-8820-4172-9adf-f79a090e994c (wait-for-connection) was prepared for execution. 2025-06-02 00:34:54.617552 | orchestrator | 2025-06-02 00:34:54 | INFO  | It takes a moment until task 5a7c8756-8820-4172-9adf-f79a090e994c (wait-for-connection) has been started and output is visible here. 2025-06-02 00:34:58.603838 | orchestrator | 2025-06-02 00:34:58.606324 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-02 00:34:58.606483 | orchestrator | 2025-06-02 00:34:58.607184 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-02 00:34:58.607653 | orchestrator | Monday 02 June 2025 00:34:58 +0000 (0:00:00.220) 0:00:00.220 *********** 2025-06-02 00:35:11.111049 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:35:11.111210 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:35:11.111224 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:35:11.111232 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:35:11.111292 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:35:11.111749 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:35:11.112263 | orchestrator | 2025-06-02 00:35:11.112784 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:35:11.113142 | orchestrator | 2025-06-02 00:35:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 00:35:11.113359 | orchestrator | 2025-06-02 00:35:11 | INFO  | Please wait and do not abort execution. 
2025-06-02 00:35:11.113835 | orchestrator | testbed-node-0 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:35:11.114249 | orchestrator | testbed-node-1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:35:11.114716 | orchestrator | testbed-node-2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:35:11.115274 | orchestrator | testbed-node-3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:35:11.115590 | orchestrator | testbed-node-4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:35:11.116043 | orchestrator | testbed-node-5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:35:11.116436 | orchestrator | 2025-06-02 00:35:11.117013 | orchestrator | 2025-06-02 00:35:11.117562 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:35:11.117742 | orchestrator | Monday 02 June 2025 00:35:11 +0000 (0:00:12.508) 0:00:12.728 *********** 2025-06-02 00:35:11.118104 | orchestrator | =============================================================================== 2025-06-02 00:35:11.118458 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.51s 2025-06-02 00:35:11.771254 | orchestrator | + osism apply hddtemp 2025-06-02 00:35:13.457547 | orchestrator | Registering Redlock._acquired_script 2025-06-02 00:35:13.457652 | orchestrator | Registering Redlock._extend_script 2025-06-02 00:35:13.457666 | orchestrator | Registering Redlock._release_script 2025-06-02 00:35:13.513182 | orchestrator | 2025-06-02 00:35:13 | INFO | Task dd93f9c4-e1b7-4c8a-a772-d904bc90a6db (hddtemp) was prepared for execution.
2025-06-02 00:35:13.513259 | orchestrator | 2025-06-02 00:35:13 | INFO  | It takes a moment until task dd93f9c4-e1b7-4c8a-a772-d904bc90a6db (hddtemp) has been started and output is visible here.
2025-06-02 00:35:17.379680 | orchestrator |
2025-06-02 00:35:17.379780 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-06-02 00:35:17.379796 | orchestrator |
2025-06-02 00:35:17.379808 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-06-02 00:35:17.379842 | orchestrator | Monday 02 June 2025 00:35:17 +0000 (0:00:00.191) 0:00:00.191 ***********
2025-06-02 00:35:17.493846 | orchestrator | ok: [testbed-manager]
2025-06-02 00:35:17.549986 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:35:17.609923 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:35:17.667857 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:35:17.792969 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:35:17.900948 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:35:17.901689 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:35:17.902760 | orchestrator |
2025-06-02 00:35:17.902819 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-06-02 00:35:17.903554 | orchestrator | Monday 02 June 2025 00:35:17 +0000 (0:00:00.525) 0:00:00.716 ***********
2025-06-02 00:35:19.139576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:35:19.140062 | orchestrator |
2025-06-02 00:35:19.141092 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-06-02 00:35:19.142169 | orchestrator | Monday 02 June 2025 00:35:19 +0000 (0:00:01.236) 0:00:01.952 ***********
2025-06-02 00:35:21.061540 | orchestrator | ok: [testbed-manager]
2025-06-02 00:35:21.062584 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:35:21.063054 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:35:21.063851 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:35:21.064372 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:35:21.064702 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:35:21.065247 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:35:21.068388 | orchestrator |
2025-06-02 00:35:21.069159 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-06-02 00:35:21.070844 | orchestrator | Monday 02 June 2025 00:35:21 +0000 (0:00:01.921) 0:00:03.873 ***********
2025-06-02 00:35:21.660015 | orchestrator | changed: [testbed-manager]
2025-06-02 00:35:21.748520 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:35:22.206353 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:35:22.207684 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:35:22.209182 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:35:22.210521 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:35:22.211450 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:35:22.212534 | orchestrator |
2025-06-02 00:35:22.213507 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-06-02 00:35:22.214114 | orchestrator | Monday 02 June 2025 00:35:22 +0000 (0:00:01.141) 0:00:05.015 ***********
2025-06-02 00:35:23.290930 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:35:23.292908 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:35:23.294303 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:35:23.295074 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:35:23.296727 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:35:23.298071 | orchestrator | ok: [testbed-manager]
2025-06-02 00:35:23.299242 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:35:23.300762 | orchestrator |
2025-06-02 00:35:23.302204 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-06-02 00:35:23.303612 | orchestrator | Monday 02 June 2025 00:35:23 +0000 (0:00:01.088) 0:00:06.104 ***********
2025-06-02 00:35:23.718487 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:35:23.792048 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:35:23.869257 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:35:23.953293 | orchestrator | changed: [testbed-manager]
2025-06-02 00:35:24.093389 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:35:24.093813 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:35:24.095494 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:35:24.097386 | orchestrator |
2025-06-02 00:35:24.098608 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-06-02 00:35:24.099608 | orchestrator | Monday 02 June 2025 00:35:24 +0000 (0:00:00.799) 0:00:06.904 ***********
2025-06-02 00:35:36.563257 | orchestrator | changed: [testbed-manager]
2025-06-02 00:35:36.563394 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:35:36.563411 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:35:36.564178 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:35:36.565961 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:35:36.566956 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:35:36.567970 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:35:36.569234 | orchestrator |
2025-06-02 00:35:36.570386 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-06-02 00:35:36.570944 | orchestrator | Monday 02 June 2025 00:35:36 +0000 (0:00:12.466) 0:00:19.370 ***********
2025-06-02 00:35:38.057212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:35:38.058776 | orchestrator |
2025-06-02 00:35:38.060326 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-06-02 00:35:38.061443 | orchestrator | Monday 02 June 2025 00:35:38 +0000 (0:00:01.494) 0:00:20.865 ***********
2025-06-02 00:35:39.977064 | orchestrator | changed: [testbed-manager]
2025-06-02 00:35:39.977933 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:35:39.980336 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:35:39.981815 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:35:39.983277 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:35:39.984748 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:35:39.985696 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:35:39.987122 | orchestrator |
2025-06-02 00:35:39.988776 | orchestrator | 2025-06-02 00:35:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:35:39.988803 | orchestrator | 2025-06-02 00:35:39 | INFO  | Please wait and do not abort execution.
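The drivetemp handling in the play above (persist the module so it is loaded at boot, and skip the explicit load when it is already active) can be sketched roughly as follows; `persist_drivetemp` and `drivetemp_loaded` are hypothetical helper names for illustration, not part of the osism.services.hddtemp role:

```shell
# Persist the drivetemp kernel module via modules-load.d, mirroring the
# "Enable Kernel Module drivetemp" task above. The path is passed in so
# the helper can be exercised against a temp file.
persist_drivetemp() {
    local conf="$1"   # e.g. /etc/modules-load.d/drivetemp.conf
    echo drivetemp > "$conf"
}

# Mirror of the skip condition on "Load Kernel Module drivetemp": only
# load when the module is not already listed in /proc/modules.
drivetemp_loaded() {
    grep -q '^drivetemp' "${1:-/proc/modules}"
}
```

On the manager the load task ran (`changed`), while the nodes skipped it because the module was already active, which matches this check.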
2025-06-02 00:35:39.988898 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:35:39.990119 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:35:39.991186 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:39.992343 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:39.993399 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:39.994670 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:39.995002 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:39.996055 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:39.997216 | orchestrator |
2025-06-02 00:35:39.998088 | orchestrator |
2025-06-02 00:35:39.999027 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:35:39.999580 | orchestrator | Monday 02 June 2025 00:35:39 +0000 (0:00:01.923) 0:00:22.789 ***********
2025-06-02 00:35:40.000619 | orchestrator | ===============================================================================
2025-06-02 00:35:40.001224 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.47s
2025-06-02 00:35:40.001789 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s
2025-06-02 00:35:40.002393 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.92s
2025-06-02 00:35:40.003065 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.49s
2025-06-02 00:35:40.003608 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.24s
2025-06-02 00:35:40.004054 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.14s
2025-06-02 00:35:40.004582 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.09s
2025-06-02 00:35:40.005051 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.80s
2025-06-02 00:35:40.005640 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.53s
2025-06-02 00:35:40.544880 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-06-02 00:35:42.167889 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-02 00:35:42.167999 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-02 00:35:42.168015 | orchestrator | + local max_attempts=60
2025-06-02 00:35:42.168028 | orchestrator | + local name=ceph-ansible
2025-06-02 00:35:42.168039 | orchestrator | + local attempt_num=1
2025-06-02 00:35:42.168050 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 00:35:42.206635 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 00:35:42.206726 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-02 00:35:42.206741 | orchestrator | + local max_attempts=60
2025-06-02 00:35:42.206752 | orchestrator | + local name=kolla-ansible
2025-06-02 00:35:42.206764 | orchestrator | + local attempt_num=1
2025-06-02 00:35:42.206836 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-02 00:35:42.230537 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 00:35:42.230611 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-02 00:35:42.230623 | orchestrator | + local max_attempts=60
2025-06-02 00:35:42.230635 | orchestrator | + local name=osism-ansible
2025-06-02 00:35:42.230646 | orchestrator | + local attempt_num=1
2025-06-02 00:35:42.232419 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-02 00:35:42.263455 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 00:35:42.263532 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-02 00:35:42.263544 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-02 00:35:42.433709 | orchestrator | ARA in ceph-ansible already disabled.
2025-06-02 00:35:42.560286 | orchestrator | ARA in kolla-ansible already disabled.
2025-06-02 00:35:42.730841 | orchestrator | ARA in osism-ansible already disabled.
2025-06-02 00:35:42.930012 | orchestrator | ARA in osism-kubernetes already disabled.
2025-06-02 00:35:42.931254 | orchestrator | + osism apply gather-facts
2025-06-02 00:35:44.588185 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:35:44.588294 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:35:44.588309 | orchestrator | Registering Redlock._release_script
2025-06-02 00:35:44.647349 | orchestrator | 2025-06-02 00:35:44 | INFO  | Task 8c0a380b-544a-4050-b507-d946fd505f7c (gather-facts) was prepared for execution.
2025-06-02 00:35:44.647440 | orchestrator | 2025-06-02 00:35:44 | INFO  | It takes a moment until task 8c0a380b-544a-4050-b507-d946fd505f7c (gather-facts) has been started and output is visible here.
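The `wait_for_container_healthy` calls traced above poll `docker inspect` until the container reports `healthy`. A minimal sketch of such a function, reconstructed from the trace; the `DOCKER_INSPECT` indirection is an added assumption so the loop can be exercised without a Docker daemon:

```shell
# Polls a container's health status until it is "healthy" or the
# attempt budget is exhausted. DOCKER_INSPECT is an injection point
# (hypothetical), defaulting to the docker inspect call seen in the trace.
DOCKER_INSPECT="${DOCKER_INSPECT:-docker inspect -f {{.State.Health.Status}}}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [ "$($DOCKER_INSPECT "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # back off between polls
    done
}
```

In the run above all three containers (ceph-ansible, kolla-ansible, osism-ansible) were already `healthy` on the first poll, so each call returned immediately.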
2025-06-02 00:35:48.628336 | orchestrator |
2025-06-02 00:35:48.628527 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 00:35:48.629770 | orchestrator |
2025-06-02 00:35:48.630598 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 00:35:48.631527 | orchestrator | Monday 02 June 2025 00:35:48 +0000 (0:00:00.214) 0:00:00.214 ***********
2025-06-02 00:35:53.665770 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:35:53.666011 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:35:53.667002 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:35:53.668061 | orchestrator | ok: [testbed-manager]
2025-06-02 00:35:53.668806 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:35:53.669524 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:35:53.670116 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:35:53.670516 | orchestrator |
2025-06-02 00:35:53.671021 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 00:35:53.672511 | orchestrator |
2025-06-02 00:35:53.672536 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 00:35:53.672549 | orchestrator | Monday 02 June 2025 00:35:53 +0000 (0:00:05.041) 0:00:05.255 ***********
2025-06-02 00:35:53.826385 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:35:53.897099 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:35:53.971877 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:35:54.046702 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:35:54.127490 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:35:54.159494 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:35:54.160660 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:35:54.161464 | orchestrator |
2025-06-02 00:35:54.162560 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:35:54.163394 | orchestrator | 2025-06-02 00:35:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:35:54.163556 | orchestrator | 2025-06-02 00:35:54 | INFO  | Please wait and do not abort execution.
2025-06-02 00:35:54.164653 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:54.165598 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:54.166644 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:54.167339 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:54.168035 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:54.168428 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:54.168967 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 00:35:54.169551 | orchestrator |
2025-06-02 00:35:54.169971 | orchestrator |
2025-06-02 00:35:54.170548 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:35:54.171030 | orchestrator | Monday 02 June 2025 00:35:54 +0000 (0:00:00.495) 0:00:05.750 ***********
2025-06-02 00:35:54.171506 | orchestrator | ===============================================================================
2025-06-02 00:35:54.171921 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.04s
2025-06-02 00:35:54.172357 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-06-02 00:35:54.756186 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-06-02 00:35:54.770586 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-06-02 00:35:54.784199 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-06-02 00:35:54.800328 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-06-02 00:35:54.815591 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-06-02 00:35:54.832221 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-06-02 00:35:54.846119 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-06-02 00:35:54.861132 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-06-02 00:35:54.873993 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-06-02 00:35:54.885670 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-06-02 00:35:54.901752 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-06-02 00:35:54.914575 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-06-02 00:35:54.926209 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-06-02 00:35:54.937118 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-06-02 00:35:54.948006 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-06-02 00:35:54.959080 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-06-02 00:35:54.970133 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-06-02 00:35:54.981389 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-06-02 00:35:54.999132 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-06-02 00:35:55.010340 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-06-02 00:35:55.021287 | orchestrator | + [[ false == \t\r\u\e ]]
2025-06-02 00:35:55.521353 | orchestrator | ok: Runtime: 0:18:17.136255
2025-06-02 00:35:55.629105 |
2025-06-02 00:35:55.629255 | TASK [Deploy services]
2025-06-02 00:35:56.161512 | orchestrator | skipping: Conditional result was False
2025-06-02 00:35:56.186011 |
2025-06-02 00:35:56.186627 | TASK [Deploy in a nutshell]
2025-06-02 00:35:56.891893 | orchestrator | + set -e
2025-06-02 00:35:56.892087 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 00:35:56.892112 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 00:35:56.892134 | orchestrator | ++ INTERACTIVE=false
2025-06-02 00:35:56.892187 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 00:35:56.892202 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 00:35:56.892231 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 00:35:56.892282 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 00:35:56.892311 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 00:35:56.892326 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 00:35:56.892344 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 00:35:56.892371 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 00:35:56.892401 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 00:35:56.892423 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 00:35:56.892448 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 00:35:56.892467 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 00:35:56.892482 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 00:35:56.892493 | orchestrator | ++ export ARA=false
2025-06-02 00:35:56.892505 | orchestrator | ++ ARA=false
2025-06-02 00:35:56.892516 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 00:35:56.892529 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 00:35:56.892540 | orchestrator | ++ export TEMPEST=false
2025-06-02 00:35:56.892551 | orchestrator | ++ TEMPEST=false
2025-06-02 00:35:56.892562 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 00:35:56.892579 | orchestrator | ++ IS_ZUUL=true
2025-06-02 00:35:56.892590 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203
2025-06-02 00:35:56.892603 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203
2025-06-02 00:35:56.892614 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 00:35:56.892624 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 00:35:56.892635 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 00:35:56.892647 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 00:35:56.892658 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 00:35:56.892668 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 00:35:56.892680 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 00:35:56.892691 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 00:35:56.892702 | orchestrator | + echo
2025-06-02 00:35:56.892714 | orchestrator |
2025-06-02 00:35:56.892726 | orchestrator | # PULL IMAGES
2025-06-02 00:35:56.892737 | orchestrator |
2025-06-02 00:35:56.892748 | orchestrator | + echo '# PULL IMAGES'
2025-06-02 00:35:56.892759 | orchestrator | + echo
2025-06-02 00:35:56.893617 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-02 00:35:56.943702 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-02 00:35:56.943818 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-06-02 00:35:58.570990 | orchestrator | 2025-06-02 00:35:58 | INFO  | Trying to run play pull-images in environment custom
2025-06-02 00:35:58.575199 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:35:58.575272 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:35:58.575287 | orchestrator | Registering Redlock._release_script
2025-06-02 00:35:58.644482 | orchestrator | 2025-06-02 00:35:58 | INFO  | Task 899af8de-1143-4544-aef6-7e4af476f64d (pull-images) was prepared for execution.
2025-06-02 00:35:58.644621 | orchestrator | 2025-06-02 00:35:58 | INFO  | It takes a moment until task 899af8de-1143-4544-aef6-7e4af476f64d (pull-images) has been started and output is visible here.
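The trace above gates the pull on a version comparison (`semver 9.1.0 7.0.0` returning 1, i.e. "greater") and then runs `osism apply -r 2 ...`, where `-r` is a retry count. A minimal sketch of that retry pattern; `apply_with_retry` is a hypothetical wrapper written for illustration, not part of the osism CLI:

```shell
# Retry a command up to a fixed number of attempts, mirroring the
# intent of "osism apply -r <retries> <play>" in the script above.
apply_with_retry() {
    local retries="$1"; shift
    local attempt=0
    until "$@"; do
        attempt=$((attempt + 1))
        if [ "$attempt" -ge "$retries" ]; then
            return 1    # budget exhausted, propagate failure
        fi
    done
}
```

Usage would look like `apply_with_retry 2 osism apply -e custom pull-images`; a flaky play that succeeds on a later attempt still lets the deploy script (running under `set -e`) continue.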
2025-06-02 00:36:01.957372 | orchestrator |
2025-06-02 00:36:01.959630 | orchestrator | PLAY [Pull images] *************************************************************
2025-06-02 00:36:01.960242 | orchestrator |
2025-06-02 00:36:01.961692 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-06-02 00:36:01.962536 | orchestrator | Monday 02 June 2025 00:36:01 +0000 (0:00:00.108) 0:00:00.108 ***********
2025-06-02 00:37:07.334396 | orchestrator | changed: [testbed-manager]
2025-06-02 00:37:07.334666 | orchestrator |
2025-06-02 00:37:07.336305 | orchestrator | TASK [Pull other images] *******************************************************
2025-06-02 00:37:07.339626 | orchestrator | Monday 02 June 2025 00:37:07 +0000 (0:01:05.373) 0:01:05.482 ***********
2025-06-02 00:37:59.654011 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-06-02 00:37:59.654102 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-06-02 00:37:59.654463 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-06-02 00:37:59.655435 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-06-02 00:37:59.655757 | orchestrator | changed: [testbed-manager] => (item=common)
2025-06-02 00:37:59.656469 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-06-02 00:37:59.657453 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-06-02 00:37:59.658228 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-06-02 00:37:59.658968 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-06-02 00:37:59.659553 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-06-02 00:37:59.660054 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-06-02 00:37:59.660506 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-06-02 00:37:59.661044 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-06-02 00:37:59.661619 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-06-02 00:37:59.661978 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-06-02 00:37:59.662579 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-06-02 00:37:59.663071 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-06-02 00:37:59.663630 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-06-02 00:37:59.664061 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-06-02 00:37:59.664406 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-06-02 00:37:59.664830 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-06-02 00:37:59.665246 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-06-02 00:37:59.665670 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-06-02 00:37:59.666137 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-06-02 00:37:59.666510 | orchestrator |
2025-06-02 00:37:59.666857 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:37:59.667109 | orchestrator | 2025-06-02 00:37:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:37:59.667247 | orchestrator | 2025-06-02 00:37:59 | INFO  | Please wait and do not abort execution.
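The "Pull other images" task above iterates over one item per service. The loop can be sketched as follows; the registry host, tag, and `pull_images` helper are placeholders for illustration, not the actual kolla image coordinates the play uses:

```shell
# Pull one image per service name, mirroring the loop items above.
# REGISTRY and TAG are assumed placeholder values; PULL_CMD is an
# injection point so the loop can be tested without a container runtime.
REGISTRY="${REGISTRY:-registry.example.com/kolla}"
TAG="${TAG:-2024.2}"
PULL_CMD="${PULL_CMD:-docker pull}"

pull_images() {
    local image
    for image in "$@"; do
        $PULL_CMD "$REGISTRY/$image:$TAG" || return 1   # stop on first failure
    done
}
```

Invoked as `pull_images aodh barbican ceilometer ...`, this matches the per-item `changed` lines in the log.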
2025-06-02 00:37:59.667697 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:37:59.668059 | orchestrator |
2025-06-02 00:37:59.668491 | orchestrator |
2025-06-02 00:37:59.668976 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:37:59.669269 | orchestrator | Monday 02 June 2025 00:37:59 +0000 (0:00:52.323) 0:01:57.805 ***********
2025-06-02 00:37:59.669756 | orchestrator | ===============================================================================
2025-06-02 00:37:59.669962 | orchestrator | Pull keystone image ---------------------------------------------------- 65.37s
2025-06-02 00:37:59.670367 | orchestrator | Pull other images ------------------------------------------------------ 52.32s
2025-06-02 00:38:01.555951 | orchestrator | 2025-06-02 00:38:01 | INFO  | Trying to run play wipe-partitions in environment custom
2025-06-02 00:38:01.559867 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:38:01.559897 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:38:01.559909 | orchestrator | Registering Redlock._release_script
2025-06-02 00:38:01.609363 | orchestrator | 2025-06-02 00:38:01 | INFO  | Task 29b7b082-056e-42a6-9040-362863762720 (wipe-partitions) was prepared for execution.
2025-06-02 00:38:01.609398 | orchestrator | 2025-06-02 00:38:01 | INFO  | It takes a moment until task 29b7b082-056e-42a6-9040-362863762720 (wipe-partitions) has been started and output is visible here.
2025-06-02 00:38:05.529805 | orchestrator | 2025-06-02 00:38:05.529919 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-02 00:38:05.532634 | orchestrator | 2025-06-02 00:38:05.532675 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-02 00:38:05.532688 | orchestrator | Monday 02 June 2025 00:38:05 +0000 (0:00:00.130) 0:00:00.130 *********** 2025-06-02 00:38:06.095156 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:38:06.095789 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:38:06.096111 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:38:06.096894 | orchestrator | 2025-06-02 00:38:06.097408 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-02 00:38:06.098109 | orchestrator | Monday 02 June 2025 00:38:06 +0000 (0:00:00.571) 0:00:00.701 *********** 2025-06-02 00:38:06.244225 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:06.350907 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:06.351001 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:38:06.351018 | orchestrator | 2025-06-02 00:38:06.351031 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-02 00:38:06.351045 | orchestrator | Monday 02 June 2025 00:38:06 +0000 (0:00:00.249) 0:00:00.951 *********** 2025-06-02 00:38:07.209579 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:38:07.209698 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:38:07.210088 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:38:07.210435 | orchestrator | 2025-06-02 00:38:07.213778 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-02 00:38:07.213935 | orchestrator | Monday 02 June 2025 00:38:07 +0000 (0:00:00.862) 0:00:01.813 *********** 2025-06-02 00:38:07.365099 | orchestrator | skipping: 
[testbed-node-3]
2025-06-02 00:38:07.461551 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:38:07.461767 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:38:07.462484 | orchestrator |
2025-06-02 00:38:07.462870 | orchestrator | TASK [Check device availability] ***********************************************
2025-06-02 00:38:07.463194 | orchestrator | Monday 02 June 2025 00:38:07 +0000 (0:00:00.254) 0:00:02.068 ***********
2025-06-02 00:38:08.689748 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 00:38:08.691566 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 00:38:08.691598 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 00:38:08.691611 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 00:38:08.692397 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 00:38:08.693390 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 00:38:08.695534 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 00:38:08.696424 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 00:38:08.697288 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 00:38:08.698269 | orchestrator |
2025-06-02 00:38:08.699110 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-06-02 00:38:08.699671 | orchestrator | Monday 02 June 2025 00:38:08 +0000 (0:00:01.227) 0:00:03.295 ***********
2025-06-02 00:38:10.058127 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 00:38:10.060216 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 00:38:10.061960 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 00:38:10.063960 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 00:38:10.064896 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 00:38:10.066094 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 00:38:10.070410 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 00:38:10.071080 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 00:38:10.071868 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 00:38:10.072874 | orchestrator |
2025-06-02 00:38:10.073466 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-06-02 00:38:10.074459 | orchestrator | Monday 02 June 2025 00:38:10 +0000 (0:00:01.364) 0:00:04.659 ***********
2025-06-02 00:38:12.352760 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 00:38:12.352852 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 00:38:12.352868 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 00:38:12.355738 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 00:38:12.356481 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 00:38:12.357545 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 00:38:12.358595 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 00:38:12.359093 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 00:38:12.360062 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 00:38:12.361027 | orchestrator |
2025-06-02 00:38:12.361391 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-06-02 00:38:12.362221 | orchestrator | Monday 02 June 2025 00:38:12 +0000 (0:00:02.298) 0:00:06.957 ***********
2025-06-02 00:38:12.928954 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:38:12.932025 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:38:12.932112 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:38:12.933129 | orchestrator |
2025-06-02 00:38:12.933770 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-06-02 00:38:12.935398 | orchestrator | Monday 02 June 2025 00:38:12 +0000 (0:00:00.574) 0:00:07.532 ***********
2025-06-02 00:38:13.526115 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:38:13.526276 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:38:13.526406 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:38:13.526666 | orchestrator |
2025-06-02 00:38:13.527066 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:38:13.530590 | orchestrator | 2025-06-02 00:38:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:38:13.530624 | orchestrator | 2025-06-02 00:38:13 | INFO  | Please wait and do not abort execution.
2025-06-02 00:38:13.530953 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 00:38:13.531276 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 00:38:13.531553 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 00:38:13.531841 | orchestrator |
2025-06-02 00:38:13.534627 | orchestrator |
2025-06-02 00:38:13.534730 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:38:13.537513 | orchestrator | Monday 02 June 2025 00:38:13 +0000 (0:00:00.598) 0:00:08.130 ***********
2025-06-02 00:38:13.537762 | orchestrator | ===============================================================================
2025-06-02 00:38:13.540578 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.30s
2025-06-02 00:38:13.540611 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s
2025-06-02 00:38:13.540837 | orchestrator | Check device availability ----------------------------------------------- 1.23s
2025-06-02 00:38:13.541095 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.86s
2025-06-02 00:38:13.541378 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s
2025-06-02 00:38:13.541647 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s
2025-06-02 00:38:13.544761 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s
2025-06-02 00:38:13.544786 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
2025-06-02 00:38:13.545109 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s
2025-06-02 00:38:15.643185 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:38:15.643257 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:38:15.643815 | orchestrator | Registering Redlock._release_script
2025-06-02 00:38:15.704181 | orchestrator | 2025-06-02 00:38:15 | INFO  | Task 2ee0ed76-e10e-4e9e-8951-e5d437c2a6e6 (facts) was prepared for execution.
2025-06-02 00:38:15.704230 | orchestrator | 2025-06-02 00:38:15 | INFO  | It takes a moment until task 2ee0ed76-e10e-4e9e-8951-e5d437c2a6e6 (facts) has been started and output is visible here.
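The disk-preparation play above (check, wipefs, zero the first 32M, then refresh udev) can be sketched as a plain shell sequence. This is an illustrative reconstruction from the task names only, not the playbook's actual implementation; the device list, the `run` helper, and the `DRY_RUN` guard are assumptions added here so the sketch is safe to execute:

```shell
# Hypothetical sketch of the disk-wipe play; DRY_RUN=1 only prints the commands.
DEVICES="/dev/sdb /dev/sdc /dev/sdd"
DRY_RUN=1

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

for dev in $DEVICES; do
  run test -b "$dev"                                  # Check device availability
  run wipefs -a "$dev"                                # Wipe partition signatures
  run dd if=/dev/zero of="$dev" bs=1M count=32        # Overwrite first 32M with zeros
done
run udevadm control --reload-rules                    # Reload udev rules
run udevadm trigger                                   # Request device events from the kernel
```

With `DRY_RUN=0` this would be destructive to the listed devices, which is why the testbed runs it only against the dedicated OSD disks sdb-sdd.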
2025-06-02 00:38:19.049772 | orchestrator |
2025-06-02 00:38:19.049867 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 00:38:19.049883 | orchestrator |
2025-06-02 00:38:19.050094 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 00:38:19.051341 | orchestrator | Monday 02 June 2025 00:38:19 +0000 (0:00:00.217) 0:00:00.217 ***********
2025-06-02 00:38:20.004811 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:38:20.004928 | orchestrator | ok: [testbed-manager]
2025-06-02 00:38:20.006149 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:38:20.006280 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:38:20.008666 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:38:20.009077 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:38:20.009469 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:38:20.011719 | orchestrator |
2025-06-02 00:38:20.011797 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 00:38:20.011817 | orchestrator | Monday 02 June 2025 00:38:19 +0000 (0:00:00.955) 0:00:01.173 ***********
2025-06-02 00:38:20.149865 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:38:20.222802 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:38:20.292761 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:38:20.359886 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:38:20.428728 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:38:21.038573 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:38:21.039064 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:38:21.040295 | orchestrator |
2025-06-02 00:38:21.041715 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 00:38:21.042794 | orchestrator |
2025-06-02 00:38:21.047250 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 00:38:21.048192 | orchestrator | Monday 02 June 2025 00:38:21 +0000 (0:00:01.037) 0:00:02.210 ***********
2025-06-02 00:38:25.501420 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:38:25.501998 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:38:25.502787 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:38:25.503592 | orchestrator | ok: [testbed-manager]
2025-06-02 00:38:25.504820 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:38:25.505220 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:38:25.505847 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:38:25.512041 | orchestrator |
2025-06-02 00:38:25.512085 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 00:38:25.512274 | orchestrator |
2025-06-02 00:38:25.515899 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 00:38:25.517055 | orchestrator | Monday 02 June 2025 00:38:25 +0000 (0:00:04.463) 0:00:06.674 ***********
2025-06-02 00:38:25.744427 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:38:25.806397 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:38:25.884813 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:38:25.951915 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:38:26.018423 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:38:26.052015 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:38:26.054219 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:38:26.056349 | orchestrator |
2025-06-02 00:38:26.059721 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:38:26.060155 | orchestrator | 2025-06-02 00:38:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:38:26.060182 | orchestrator | 2025-06-02 00:38:26 | INFO  | Please wait and do not abort execution. 2025-06-02 00:38:26.061961 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:38:26.062857 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:38:26.064134 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:38:26.066631 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:38:26.066805 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:38:26.067602 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:38:26.068506 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:38:26.069212 | orchestrator | 2025-06-02 00:38:26.069728 | orchestrator | 2025-06-02 00:38:26.070496 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:38:26.071037 | orchestrator | Monday 02 June 2025 00:38:26 +0000 (0:00:00.550) 0:00:07.225 *********** 2025-06-02 00:38:26.071761 | orchestrator | =============================================================================== 2025-06-02 00:38:26.072045 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.46s 2025-06-02 00:38:26.072561 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s 2025-06-02 00:38:26.073211 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.96s 2025-06-02 00:38:26.073410 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2025-06-02 
00:38:28.080082 | orchestrator | 2025-06-02 00:38:28 | INFO  | Task 1b619190-f050-4a7f-91a4-4c66ca1d0cdb (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-02 00:38:28.081369 | orchestrator | 2025-06-02 00:38:28 | INFO  | It takes a moment until task 1b619190-f050-4a7f-91a4-4c66ca1d0cdb (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-06-02 00:38:31.731199 | orchestrator | 2025-06-02 00:38:31.731298 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 00:38:31.731315 | orchestrator | 2025-06-02 00:38:31.731439 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 00:38:31.731534 | orchestrator | Monday 02 June 2025 00:38:31 +0000 (0:00:00.244) 0:00:00.244 *********** 2025-06-02 00:38:31.964104 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 00:38:31.964251 | orchestrator | 2025-06-02 00:38:31.966415 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 00:38:31.966988 | orchestrator | Monday 02 June 2025 00:38:31 +0000 (0:00:00.237) 0:00:00.481 *********** 2025-06-02 00:38:32.169226 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:38:32.169560 | orchestrator | 2025-06-02 00:38:32.170517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:32.170891 | orchestrator | Monday 02 June 2025 00:38:32 +0000 (0:00:00.204) 0:00:00.686 *********** 2025-06-02 00:38:32.485691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-02 00:38:32.485827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-02 00:38:32.485912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-02 00:38:32.489526 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-02 00:38:32.489559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-02 00:38:32.489571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-02 00:38:32.489582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-02 00:38:32.489618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-02 00:38:32.490402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-02 00:38:32.491191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-02 00:38:32.491933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-02 00:38:32.492579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-02 00:38:32.493129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-02 00:38:32.493879 | orchestrator | 2025-06-02 00:38:32.495116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:32.495213 | orchestrator | Monday 02 June 2025 00:38:32 +0000 (0:00:00.318) 0:00:01.004 *********** 2025-06-02 00:38:32.966550 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:32.966991 | orchestrator | 2025-06-02 00:38:32.967831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:32.972049 | orchestrator | Monday 02 June 2025 00:38:32 +0000 (0:00:00.480) 0:00:01.485 *********** 2025-06-02 00:38:33.120261 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:33.120802 | orchestrator | 2025-06-02 00:38:33.121999 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:33.122247 | orchestrator | Monday 02 June 2025 00:38:33 +0000 (0:00:00.154) 0:00:01.640 *********** 2025-06-02 00:38:33.253502 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:33.257624 | orchestrator | 2025-06-02 00:38:33.258117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:33.258644 | orchestrator | Monday 02 June 2025 00:38:33 +0000 (0:00:00.134) 0:00:01.774 *********** 2025-06-02 00:38:33.477993 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:33.478229 | orchestrator | 2025-06-02 00:38:33.478673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:33.479169 | orchestrator | Monday 02 June 2025 00:38:33 +0000 (0:00:00.223) 0:00:01.997 *********** 2025-06-02 00:38:33.664737 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:33.664820 | orchestrator | 2025-06-02 00:38:33.664949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:33.666143 | orchestrator | Monday 02 June 2025 00:38:33 +0000 (0:00:00.185) 0:00:02.183 *********** 2025-06-02 00:38:33.832829 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:33.833652 | orchestrator | 2025-06-02 00:38:33.835222 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:33.835445 | orchestrator | Monday 02 June 2025 00:38:33 +0000 (0:00:00.168) 0:00:02.351 *********** 2025-06-02 00:38:34.016066 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:34.017118 | orchestrator | 2025-06-02 00:38:34.018831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:34.019762 | orchestrator | Monday 02 June 2025 00:38:34 +0000 (0:00:00.183) 0:00:02.535 *********** 2025-06-02 
00:38:34.183970 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:34.184797 | orchestrator | 2025-06-02 00:38:34.186843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:34.188024 | orchestrator | Monday 02 June 2025 00:38:34 +0000 (0:00:00.168) 0:00:02.703 *********** 2025-06-02 00:38:34.715999 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f) 2025-06-02 00:38:34.716117 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f) 2025-06-02 00:38:34.716588 | orchestrator | 2025-06-02 00:38:34.717252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:34.719534 | orchestrator | Monday 02 June 2025 00:38:34 +0000 (0:00:00.528) 0:00:03.231 *********** 2025-06-02 00:38:35.133905 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931) 2025-06-02 00:38:35.136100 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931) 2025-06-02 00:38:35.137150 | orchestrator | 2025-06-02 00:38:35.141552 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:35.142103 | orchestrator | Monday 02 June 2025 00:38:35 +0000 (0:00:00.418) 0:00:03.650 *********** 2025-06-02 00:38:35.927082 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1) 2025-06-02 00:38:35.927191 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1) 2025-06-02 00:38:35.929078 | orchestrator | 2025-06-02 00:38:35.930106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:35.931584 | orchestrator | Monday 02 June 2025 00:38:35 +0000 
(0:00:00.796) 0:00:04.447 *********** 2025-06-02 00:38:36.505232 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2) 2025-06-02 00:38:36.505316 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2) 2025-06-02 00:38:36.508087 | orchestrator | 2025-06-02 00:38:36.508447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:36.508748 | orchestrator | Monday 02 June 2025 00:38:36 +0000 (0:00:00.577) 0:00:05.024 *********** 2025-06-02 00:38:37.141811 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 00:38:37.142694 | orchestrator | 2025-06-02 00:38:37.143192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:37.144075 | orchestrator | Monday 02 June 2025 00:38:37 +0000 (0:00:00.638) 0:00:05.662 *********** 2025-06-02 00:38:37.484781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-02 00:38:37.486272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-02 00:38:37.487070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-02 00:38:37.487607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-02 00:38:37.487860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-02 00:38:37.488404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-02 00:38:37.488795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-02 00:38:37.489137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2025-06-02 00:38:37.489429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-02 00:38:37.489978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-02 00:38:37.490300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-02 00:38:37.490580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 00:38:37.491070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 00:38:37.491457 | orchestrator | 2025-06-02 00:38:37.491776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:37.492113 | orchestrator | Monday 02 June 2025 00:38:37 +0000 (0:00:00.340) 0:00:06.003 *********** 2025-06-02 00:38:37.693282 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:37.693565 | orchestrator | 2025-06-02 00:38:37.693645 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:37.694233 | orchestrator | Monday 02 June 2025 00:38:37 +0000 (0:00:00.210) 0:00:06.213 *********** 2025-06-02 00:38:37.870247 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:37.872139 | orchestrator | 2025-06-02 00:38:37.872181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:37.873892 | orchestrator | Monday 02 June 2025 00:38:37 +0000 (0:00:00.174) 0:00:06.388 *********** 2025-06-02 00:38:38.061179 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:38.061254 | orchestrator | 2025-06-02 00:38:38.061269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:38.061712 | orchestrator | Monday 02 June 2025 00:38:38 +0000 
(0:00:00.192) 0:00:06.580 *********** 2025-06-02 00:38:38.251723 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:38.252934 | orchestrator | 2025-06-02 00:38:38.253975 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:38.254124 | orchestrator | Monday 02 June 2025 00:38:38 +0000 (0:00:00.191) 0:00:06.771 *********** 2025-06-02 00:38:38.420808 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:38.420888 | orchestrator | 2025-06-02 00:38:38.421710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:38.422887 | orchestrator | Monday 02 June 2025 00:38:38 +0000 (0:00:00.168) 0:00:06.939 *********** 2025-06-02 00:38:38.618470 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:38.619061 | orchestrator | 2025-06-02 00:38:38.620443 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:38.620474 | orchestrator | Monday 02 June 2025 00:38:38 +0000 (0:00:00.197) 0:00:07.137 *********** 2025-06-02 00:38:38.815090 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:38.815660 | orchestrator | 2025-06-02 00:38:38.816325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:38.816943 | orchestrator | Monday 02 June 2025 00:38:38 +0000 (0:00:00.197) 0:00:07.335 *********** 2025-06-02 00:38:38.995910 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:38.996539 | orchestrator | 2025-06-02 00:38:38.999754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:38.999799 | orchestrator | Monday 02 June 2025 00:38:38 +0000 (0:00:00.179) 0:00:07.514 *********** 2025-06-02 00:38:39.768573 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 00:38:39.769628 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-02 
00:38:39.773203 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 00:38:39.774514 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 00:38:39.775378 | orchestrator | 2025-06-02 00:38:39.776269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:39.780155 | orchestrator | Monday 02 June 2025 00:38:39 +0000 (0:00:00.772) 0:00:08.287 *********** 2025-06-02 00:38:39.966301 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:39.966839 | orchestrator | 2025-06-02 00:38:39.967524 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:39.968585 | orchestrator | Monday 02 June 2025 00:38:39 +0000 (0:00:00.199) 0:00:08.486 *********** 2025-06-02 00:38:40.179121 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:40.179627 | orchestrator | 2025-06-02 00:38:40.180488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:40.180938 | orchestrator | Monday 02 June 2025 00:38:40 +0000 (0:00:00.210) 0:00:08.697 *********** 2025-06-02 00:38:40.368687 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:40.369540 | orchestrator | 2025-06-02 00:38:40.369900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:40.370389 | orchestrator | Monday 02 June 2025 00:38:40 +0000 (0:00:00.191) 0:00:08.888 *********** 2025-06-02 00:38:40.572250 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:40.573794 | orchestrator | 2025-06-02 00:38:40.574088 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 00:38:40.574587 | orchestrator | Monday 02 June 2025 00:38:40 +0000 (0:00:00.201) 0:00:09.089 *********** 2025-06-02 00:38:40.723748 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-02 00:38:40.725108 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-02 00:38:40.726502 | orchestrator | 2025-06-02 00:38:40.726534 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-02 00:38:40.727386 | orchestrator | Monday 02 June 2025 00:38:40 +0000 (0:00:00.154) 0:00:09.244 *********** 2025-06-02 00:38:40.845383 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:40.845876 | orchestrator | 2025-06-02 00:38:40.846302 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 00:38:40.847597 | orchestrator | Monday 02 June 2025 00:38:40 +0000 (0:00:00.121) 0:00:09.365 *********** 2025-06-02 00:38:40.987684 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:40.988580 | orchestrator | 2025-06-02 00:38:40.988956 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 00:38:40.989622 | orchestrator | Monday 02 June 2025 00:38:40 +0000 (0:00:00.142) 0:00:09.508 *********** 2025-06-02 00:38:41.113601 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:41.114098 | orchestrator | 2025-06-02 00:38:41.114445 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 00:38:41.115235 | orchestrator | Monday 02 June 2025 00:38:41 +0000 (0:00:00.125) 0:00:09.634 *********** 2025-06-02 00:38:41.248804 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:38:41.249722 | orchestrator | 2025-06-02 00:38:41.250697 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 00:38:41.254519 | orchestrator | Monday 02 June 2025 00:38:41 +0000 (0:00:00.134) 0:00:09.768 *********** 2025-06-02 00:38:41.422961 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3a2aacf8-31c8-546a-a559-f7f9618b27d4'}}) 2025-06-02 00:38:41.424368 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1905453d-e612-5c47-8424-6bc4888ba216'}}) 2025-06-02 00:38:41.425100 | orchestrator | 2025-06-02 00:38:41.425718 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 00:38:41.426885 | orchestrator | Monday 02 June 2025 00:38:41 +0000 (0:00:00.171) 0:00:09.940 *********** 2025-06-02 00:38:41.618706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3a2aacf8-31c8-546a-a559-f7f9618b27d4'}})  2025-06-02 00:38:41.619143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1905453d-e612-5c47-8424-6bc4888ba216'}})  2025-06-02 00:38:41.620649 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:41.621244 | orchestrator | 2025-06-02 00:38:41.623186 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 00:38:41.623699 | orchestrator | Monday 02 June 2025 00:38:41 +0000 (0:00:00.197) 0:00:10.137 *********** 2025-06-02 00:38:42.024990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3a2aacf8-31c8-546a-a559-f7f9618b27d4'}})  2025-06-02 00:38:42.026540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1905453d-e612-5c47-8424-6bc4888ba216'}})  2025-06-02 00:38:42.026568 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:42.030548 | orchestrator | 2025-06-02 00:38:42.030589 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 00:38:42.030652 | orchestrator | Monday 02 June 2025 00:38:42 +0000 (0:00:00.403) 0:00:10.541 *********** 2025-06-02 00:38:42.173036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3a2aacf8-31c8-546a-a559-f7f9618b27d4'}})  2025-06-02 00:38:42.175943 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1905453d-e612-5c47-8424-6bc4888ba216'}})  2025-06-02 00:38:42.176529 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:42.176920 | orchestrator | 2025-06-02 00:38:42.177486 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 00:38:42.177799 | orchestrator | Monday 02 June 2025 00:38:42 +0000 (0:00:00.149) 0:00:10.690 *********** 2025-06-02 00:38:42.313870 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:38:42.314000 | orchestrator | 2025-06-02 00:38:42.314074 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 00:38:42.314089 | orchestrator | Monday 02 June 2025 00:38:42 +0000 (0:00:00.141) 0:00:10.831 *********** 2025-06-02 00:38:42.452671 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:38:42.455308 | orchestrator | 2025-06-02 00:38:42.455735 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 00:38:42.457283 | orchestrator | Monday 02 June 2025 00:38:42 +0000 (0:00:00.139) 0:00:10.971 *********** 2025-06-02 00:38:42.571136 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:42.572569 | orchestrator | 2025-06-02 00:38:42.573509 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 00:38:42.574819 | orchestrator | Monday 02 June 2025 00:38:42 +0000 (0:00:00.119) 0:00:11.090 *********** 2025-06-02 00:38:42.728747 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:42.728909 | orchestrator | 2025-06-02 00:38:42.729569 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 00:38:42.729813 | orchestrator | Monday 02 June 2025 00:38:42 +0000 (0:00:00.156) 0:00:11.247 *********** 2025-06-02 00:38:42.869670 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:38:42.870838 | orchestrator | 2025-06-02 
00:38:42.873041 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 00:38:42.873076 | orchestrator | Monday 02 June 2025 00:38:42 +0000 (0:00:00.140) 0:00:11.387 ***********
2025-06-02 00:38:43.058707 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 00:38:43.060799 | orchestrator |  "ceph_osd_devices": {
2025-06-02 00:38:43.065445 | orchestrator |  "sdb": {
2025-06-02 00:38:43.065488 | orchestrator |  "osd_lvm_uuid": "3a2aacf8-31c8-546a-a559-f7f9618b27d4"
2025-06-02 00:38:43.065512 | orchestrator |  },
2025-06-02 00:38:43.065701 | orchestrator |  "sdc": {
2025-06-02 00:38:43.066817 | orchestrator |  "osd_lvm_uuid": "1905453d-e612-5c47-8424-6bc4888ba216"
2025-06-02 00:38:43.067387 | orchestrator |  }
2025-06-02 00:38:43.067906 | orchestrator |  }
2025-06-02 00:38:43.068471 | orchestrator | }
2025-06-02 00:38:43.068991 | orchestrator |
2025-06-02 00:38:43.069569 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 00:38:43.070431 | orchestrator | Monday 02 June 2025 00:38:43 +0000 (0:00:00.188) 0:00:11.576 ***********
2025-06-02 00:38:43.203219 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:38:43.203772 | orchestrator |
2025-06-02 00:38:43.205200 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 00:38:43.210392 | orchestrator | Monday 02 June 2025 00:38:43 +0000 (0:00:00.141) 0:00:11.718 ***********
2025-06-02 00:38:43.401210 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:38:43.404474 | orchestrator |
2025-06-02 00:38:43.406165 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 00:38:43.406905 | orchestrator | Monday 02 June 2025 00:38:43 +0000 (0:00:00.200) 0:00:11.918 ***********
2025-06-02 00:38:43.541077 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:38:43.542272 | orchestrator |
2025-06-02 00:38:43.544020 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 00:38:43.544900 | orchestrator | Monday 02 June 2025 00:38:43 +0000 (0:00:00.140) 0:00:12.059 ***********
2025-06-02 00:38:43.762291 | orchestrator | changed: [testbed-node-3] => {
2025-06-02 00:38:43.762931 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 00:38:43.763486 | orchestrator |  "ceph_osd_devices": {
2025-06-02 00:38:43.767024 | orchestrator |  "sdb": {
2025-06-02 00:38:43.767541 | orchestrator |  "osd_lvm_uuid": "3a2aacf8-31c8-546a-a559-f7f9618b27d4"
2025-06-02 00:38:43.768275 | orchestrator |  },
2025-06-02 00:38:43.769941 | orchestrator |  "sdc": {
2025-06-02 00:38:43.770522 | orchestrator |  "osd_lvm_uuid": "1905453d-e612-5c47-8424-6bc4888ba216"
2025-06-02 00:38:43.772072 | orchestrator |  }
2025-06-02 00:38:43.772776 | orchestrator |  },
2025-06-02 00:38:43.773432 | orchestrator |  "lvm_volumes": [
2025-06-02 00:38:43.774768 | orchestrator |  {
2025-06-02 00:38:43.776043 | orchestrator |  "data": "osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4",
2025-06-02 00:38:43.776366 | orchestrator |  "data_vg": "ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4"
2025-06-02 00:38:43.777405 | orchestrator |  },
2025-06-02 00:38:43.777596 | orchestrator |  {
2025-06-02 00:38:43.780709 | orchestrator |  "data": "osd-block-1905453d-e612-5c47-8424-6bc4888ba216",
2025-06-02 00:38:43.782755 | orchestrator |  "data_vg": "ceph-1905453d-e612-5c47-8424-6bc4888ba216"
2025-06-02 00:38:43.783873 | orchestrator |  }
2025-06-02 00:38:43.785689 | orchestrator |  ]
2025-06-02 00:38:43.787477 | orchestrator |  }
2025-06-02 00:38:43.790286 | orchestrator | }
2025-06-02 00:38:43.791191 | orchestrator |
2025-06-02 00:38:43.792108 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 00:38:43.793268 | orchestrator | Monday 02 June 2025 00:38:43 +0000 (0:00:00.219) 0:00:12.279 ***********
2025-06-02
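The configuration data printed above pairs each entry of `ceph_osd_devices` with one `lvm_volumes` item whose LV is named `osd-block-<osd_lvm_uuid>` and whose VG is named `ceph-<osd_lvm_uuid>`. A minimal sketch of that derivation, reconstructed from the debug output only (the helper name `build_lvm_volumes` is hypothetical, not taken from the playbook):

```python
# Sketch: derive ceph-ansible style lvm_volumes entries from the
# ceph_osd_devices mapping shown in the log. The function name is
# illustrative; the real playbook builds this via Jinja2 set_fact tasks.
def build_lvm_volumes(ceph_osd_devices):
    volumes = []
    for device, data in ceph_osd_devices.items():
        uuid = data["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

# Values taken verbatim from the testbed-node-3 debug output above.
devices = {
    "sdb": {"osd_lvm_uuid": "3a2aacf8-31c8-546a-a559-f7f9618b27d4"},
    "sdc": {"osd_lvm_uuid": "1905453d-e612-5c47-8424-6bc4888ba216"},
}
print(build_lvm_volumes(devices))
```

Running this reproduces the `lvm_volumes` list that the "Print configuration data" task shows for testbed-node-3.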
00:38:46.003235 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 00:38:46.003412 | orchestrator | 2025-06-02 00:38:46.003431 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 00:38:46.003442 | orchestrator | 2025-06-02 00:38:46.004530 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 00:38:46.004562 | orchestrator | Monday 02 June 2025 00:38:45 +0000 (0:00:02.234) 0:00:14.513 *********** 2025-06-02 00:38:46.331186 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 00:38:46.331378 | orchestrator | 2025-06-02 00:38:46.333821 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 00:38:46.334564 | orchestrator | Monday 02 June 2025 00:38:46 +0000 (0:00:00.335) 0:00:14.848 *********** 2025-06-02 00:38:46.603548 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:38:46.607939 | orchestrator | 2025-06-02 00:38:46.611010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:46.611939 | orchestrator | Monday 02 June 2025 00:38:46 +0000 (0:00:00.272) 0:00:15.121 *********** 2025-06-02 00:38:46.991320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 00:38:46.991865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 00:38:46.992248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 00:38:46.992806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 00:38:46.996699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 00:38:46.998240 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 00:38:46.998683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 00:38:46.999054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 00:38:46.999263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 00:38:46.999744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 00:38:47.000055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 00:38:47.000415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 00:38:47.000861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 00:38:47.001250 | orchestrator | 2025-06-02 00:38:47.001443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:47.001816 | orchestrator | Monday 02 June 2025 00:38:46 +0000 (0:00:00.386) 0:00:15.508 *********** 2025-06-02 00:38:47.183835 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:47.184804 | orchestrator | 2025-06-02 00:38:47.184882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:47.189734 | orchestrator | Monday 02 June 2025 00:38:47 +0000 (0:00:00.193) 0:00:15.701 *********** 2025-06-02 00:38:47.388953 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:47.389603 | orchestrator | 2025-06-02 00:38:47.389957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:47.393728 | orchestrator | Monday 02 June 2025 00:38:47 +0000 (0:00:00.205) 0:00:15.907 *********** 2025-06-02 00:38:47.582319 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 00:38:47.584671 | orchestrator | 2025-06-02 00:38:47.584808 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:47.585184 | orchestrator | Monday 02 June 2025 00:38:47 +0000 (0:00:00.194) 0:00:16.101 *********** 2025-06-02 00:38:47.790323 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:47.791465 | orchestrator | 2025-06-02 00:38:47.791615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:47.793853 | orchestrator | Monday 02 June 2025 00:38:47 +0000 (0:00:00.207) 0:00:16.309 *********** 2025-06-02 00:38:48.301940 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:48.302646 | orchestrator | 2025-06-02 00:38:48.302763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:48.303748 | orchestrator | Monday 02 June 2025 00:38:48 +0000 (0:00:00.510) 0:00:16.819 *********** 2025-06-02 00:38:48.478713 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:48.478877 | orchestrator | 2025-06-02 00:38:48.479235 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:48.481817 | orchestrator | Monday 02 June 2025 00:38:48 +0000 (0:00:00.179) 0:00:16.999 *********** 2025-06-02 00:38:48.628544 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:48.629497 | orchestrator | 2025-06-02 00:38:48.629891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:48.631396 | orchestrator | Monday 02 June 2025 00:38:48 +0000 (0:00:00.150) 0:00:17.149 *********** 2025-06-02 00:38:48.794147 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:48.794237 | orchestrator | 2025-06-02 00:38:48.795195 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:48.795267 | 
orchestrator | Monday 02 June 2025 00:38:48 +0000 (0:00:00.161) 0:00:17.310 *********** 2025-06-02 00:38:49.205878 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04) 2025-06-02 00:38:49.206122 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04) 2025-06-02 00:38:49.206197 | orchestrator | 2025-06-02 00:38:49.206585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:49.207493 | orchestrator | Monday 02 June 2025 00:38:49 +0000 (0:00:00.415) 0:00:17.726 *********** 2025-06-02 00:38:49.668055 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d) 2025-06-02 00:38:49.668296 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d) 2025-06-02 00:38:49.670432 | orchestrator | 2025-06-02 00:38:49.670713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:49.671408 | orchestrator | Monday 02 June 2025 00:38:49 +0000 (0:00:00.460) 0:00:18.186 *********** 2025-06-02 00:38:50.004942 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd) 2025-06-02 00:38:50.005489 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd) 2025-06-02 00:38:50.008012 | orchestrator | 2025-06-02 00:38:50.008936 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:50.009433 | orchestrator | Monday 02 June 2025 00:38:49 +0000 (0:00:00.337) 0:00:18.524 *********** 2025-06-02 00:38:50.507126 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5) 2025-06-02 00:38:50.508045 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5) 2025-06-02 00:38:50.508437 | orchestrator | 2025-06-02 00:38:50.509399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:50.509781 | orchestrator | Monday 02 June 2025 00:38:50 +0000 (0:00:00.499) 0:00:19.024 *********** 2025-06-02 00:38:50.806400 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 00:38:50.807701 | orchestrator | 2025-06-02 00:38:50.807778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:50.809095 | orchestrator | Monday 02 June 2025 00:38:50 +0000 (0:00:00.301) 0:00:19.325 *********** 2025-06-02 00:38:51.083423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 00:38:51.085049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 00:38:51.085978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 00:38:51.086831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 00:38:51.087886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-02 00:38:51.089000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 00:38:51.089280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 00:38:51.089549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 00:38:51.089882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 00:38:51.090271 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 00:38:51.090465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 00:38:51.090763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 00:38:51.091183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 00:38:51.091378 | orchestrator | 2025-06-02 00:38:51.091706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:51.091913 | orchestrator | Monday 02 June 2025 00:38:51 +0000 (0:00:00.278) 0:00:19.604 *********** 2025-06-02 00:38:51.282263 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:51.283670 | orchestrator | 2025-06-02 00:38:51.284448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:51.285871 | orchestrator | Monday 02 June 2025 00:38:51 +0000 (0:00:00.198) 0:00:19.802 *********** 2025-06-02 00:38:51.818782 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:51.819615 | orchestrator | 2025-06-02 00:38:51.819908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:51.820333 | orchestrator | Monday 02 June 2025 00:38:51 +0000 (0:00:00.537) 0:00:20.339 *********** 2025-06-02 00:38:51.984879 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:51.985826 | orchestrator | 2025-06-02 00:38:51.986503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:51.988262 | orchestrator | Monday 02 June 2025 00:38:51 +0000 (0:00:00.164) 0:00:20.504 *********** 2025-06-02 00:38:52.165131 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:52.165254 | orchestrator | 2025-06-02 00:38:52.165872 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-06-02 00:38:52.165897 | orchestrator | Monday 02 June 2025 00:38:52 +0000 (0:00:00.179) 0:00:20.683 *********** 2025-06-02 00:38:52.338274 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:52.344500 | orchestrator | 2025-06-02 00:38:52.345818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:52.346514 | orchestrator | Monday 02 June 2025 00:38:52 +0000 (0:00:00.173) 0:00:20.856 *********** 2025-06-02 00:38:52.499261 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:52.500233 | orchestrator | 2025-06-02 00:38:52.502817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:52.503502 | orchestrator | Monday 02 June 2025 00:38:52 +0000 (0:00:00.160) 0:00:21.017 *********** 2025-06-02 00:38:52.677119 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:52.677668 | orchestrator | 2025-06-02 00:38:52.679990 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:52.680892 | orchestrator | Monday 02 June 2025 00:38:52 +0000 (0:00:00.177) 0:00:21.195 *********** 2025-06-02 00:38:52.840246 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:52.840658 | orchestrator | 2025-06-02 00:38:52.843884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:52.844222 | orchestrator | Monday 02 June 2025 00:38:52 +0000 (0:00:00.165) 0:00:21.360 *********** 2025-06-02 00:38:53.408781 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 00:38:53.409710 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 00:38:53.410474 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 00:38:53.411366 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 00:38:53.412381 | orchestrator | 2025-06-02 
00:38:53.415616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:53.415690 | orchestrator | Monday 02 June 2025 00:38:53 +0000 (0:00:00.568) 0:00:21.928 *********** 2025-06-02 00:38:53.594611 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:53.594766 | orchestrator | 2025-06-02 00:38:53.595882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:53.596734 | orchestrator | Monday 02 June 2025 00:38:53 +0000 (0:00:00.183) 0:00:22.112 *********** 2025-06-02 00:38:53.764145 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:53.765218 | orchestrator | 2025-06-02 00:38:53.766229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:53.766881 | orchestrator | Monday 02 June 2025 00:38:53 +0000 (0:00:00.171) 0:00:22.284 *********** 2025-06-02 00:38:53.930322 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:53.931083 | orchestrator | 2025-06-02 00:38:53.932432 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:38:53.933265 | orchestrator | Monday 02 June 2025 00:38:53 +0000 (0:00:00.163) 0:00:22.448 *********** 2025-06-02 00:38:54.120057 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:54.120564 | orchestrator | 2025-06-02 00:38:54.121494 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 00:38:54.122304 | orchestrator | Monday 02 June 2025 00:38:54 +0000 (0:00:00.191) 0:00:22.639 *********** 2025-06-02 00:38:54.373211 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-02 00:38:54.375763 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-02 00:38:54.383122 | orchestrator | 2025-06-02 00:38:54.384316 | orchestrator | TASK [Generate WAL VG names] 
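The "Set UUIDs for OSD VGs/LVs" task above starts from `value: None` per device and ends up with stable `osd_lvm_uuid` values; the UUIDs in the log (e.g. `...-546a-...`, `...-5c47-...`, `...-58f3-...`) carry version digit 5, i.e. they are name-based and reproducible across runs. A sketch of how such IDs can be derived; the namespace and name format here are assumptions, not taken from the playbook:

```python
import uuid

# Assumption: a name-based (version 5) UUID per host/device pair, so a
# re-run of the playbook regenerates the same osd_lvm_uuid. The DNS
# namespace and the "host/device" name format are illustrative only.
def stable_osd_uuid(hostname: str, device: str) -> str:
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}/{device}"))

# Deterministic: the same pair always yields the same UUID,
# distinct pairs yield distinct UUIDs.
print(stable_osd_uuid("testbed-node-4", "sdb"))
```

Whatever the exact inputs, name-based generation explains why the per-device UUIDs stay fixed between the "Generate lvm_volumes structure" runs for each node.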
*************************************************** 2025-06-02 00:38:54.385295 | orchestrator | Monday 02 June 2025 00:38:54 +0000 (0:00:00.253) 0:00:22.892 *********** 2025-06-02 00:38:54.511389 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:54.512593 | orchestrator | 2025-06-02 00:38:54.513795 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 00:38:54.517310 | orchestrator | Monday 02 June 2025 00:38:54 +0000 (0:00:00.138) 0:00:23.030 *********** 2025-06-02 00:38:54.640717 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:54.642246 | orchestrator | 2025-06-02 00:38:54.645655 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 00:38:54.647156 | orchestrator | Monday 02 June 2025 00:38:54 +0000 (0:00:00.129) 0:00:23.159 *********** 2025-06-02 00:38:54.758254 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:54.758642 | orchestrator | 2025-06-02 00:38:54.759997 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 00:38:54.760893 | orchestrator | Monday 02 June 2025 00:38:54 +0000 (0:00:00.118) 0:00:23.278 *********** 2025-06-02 00:38:54.886593 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:38:54.889995 | orchestrator | 2025-06-02 00:38:54.890076 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 00:38:54.890341 | orchestrator | Monday 02 June 2025 00:38:54 +0000 (0:00:00.127) 0:00:23.406 *********** 2025-06-02 00:38:55.044861 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89fe9f69-ec16-58f3-8212-bc080cf4c28c'}}) 2025-06-02 00:38:55.045037 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a308c11-b64c-503e-b49b-4b3a12050ecf'}}) 2025-06-02 00:38:55.046301 | orchestrator | 2025-06-02 00:38:55.047028 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 00:38:55.051006 | orchestrator | Monday 02 June 2025 00:38:55 +0000 (0:00:00.157) 0:00:23.564 *********** 2025-06-02 00:38:55.175574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89fe9f69-ec16-58f3-8212-bc080cf4c28c'}})  2025-06-02 00:38:55.175652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a308c11-b64c-503e-b49b-4b3a12050ecf'}})  2025-06-02 00:38:55.176918 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:55.177981 | orchestrator | 2025-06-02 00:38:55.181642 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 00:38:55.182287 | orchestrator | Monday 02 June 2025 00:38:55 +0000 (0:00:00.130) 0:00:23.694 *********** 2025-06-02 00:38:55.313746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89fe9f69-ec16-58f3-8212-bc080cf4c28c'}})  2025-06-02 00:38:55.314669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a308c11-b64c-503e-b49b-4b3a12050ecf'}})  2025-06-02 00:38:55.315636 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:55.316547 | orchestrator | 2025-06-02 00:38:55.319927 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 00:38:55.320462 | orchestrator | Monday 02 June 2025 00:38:55 +0000 (0:00:00.138) 0:00:23.833 *********** 2025-06-02 00:38:55.447925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89fe9f69-ec16-58f3-8212-bc080cf4c28c'}})  2025-06-02 00:38:55.448997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a308c11-b64c-503e-b49b-4b3a12050ecf'}})  2025-06-02 00:38:55.450276 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:55.453581 | 
orchestrator | 2025-06-02 00:38:55.454341 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 00:38:55.455080 | orchestrator | Monday 02 June 2025 00:38:55 +0000 (0:00:00.134) 0:00:23.968 *********** 2025-06-02 00:38:55.575803 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:38:55.575947 | orchestrator | 2025-06-02 00:38:55.576928 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 00:38:55.577278 | orchestrator | Monday 02 June 2025 00:38:55 +0000 (0:00:00.127) 0:00:24.095 *********** 2025-06-02 00:38:55.705687 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:38:55.705762 | orchestrator | 2025-06-02 00:38:55.705777 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 00:38:55.705882 | orchestrator | Monday 02 June 2025 00:38:55 +0000 (0:00:00.128) 0:00:24.223 *********** 2025-06-02 00:38:55.827158 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:55.827637 | orchestrator | 2025-06-02 00:38:55.828304 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 00:38:55.829056 | orchestrator | Monday 02 June 2025 00:38:55 +0000 (0:00:00.123) 0:00:24.347 *********** 2025-06-02 00:38:56.077873 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:56.078703 | orchestrator | 2025-06-02 00:38:56.079829 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 00:38:56.080421 | orchestrator | Monday 02 June 2025 00:38:56 +0000 (0:00:00.250) 0:00:24.597 *********** 2025-06-02 00:38:56.192716 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:38:56.193728 | orchestrator | 2025-06-02 00:38:56.193826 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 00:38:56.194074 | orchestrator | Monday 02 June 2025 00:38:56 +0000 
(0:00:00.111) 0:00:24.709 ***********
2025-06-02 00:38:56.310597 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 00:38:56.315840 | orchestrator |  "ceph_osd_devices": {
2025-06-02 00:38:56.317502 | orchestrator |  "sdb": {
2025-06-02 00:38:56.319463 | orchestrator |  "osd_lvm_uuid": "89fe9f69-ec16-58f3-8212-bc080cf4c28c"
2025-06-02 00:38:56.322436 | orchestrator |  },
2025-06-02 00:38:56.323207 | orchestrator |  "sdc": {
2025-06-02 00:38:56.323670 | orchestrator |  "osd_lvm_uuid": "3a308c11-b64c-503e-b49b-4b3a12050ecf"
2025-06-02 00:38:56.324109 | orchestrator |  }
2025-06-02 00:38:56.324791 | orchestrator |  }
2025-06-02 00:38:56.326127 | orchestrator | }
2025-06-02 00:38:56.326853 | orchestrator |
2025-06-02 00:38:56.328124 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 00:38:56.328883 | orchestrator | Monday 02 June 2025 00:38:56 +0000 (0:00:00.120) 0:00:24.829 ***********
2025-06-02 00:38:56.439198 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:38:56.440761 | orchestrator |
2025-06-02 00:38:56.442137 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 00:38:56.446194 | orchestrator | Monday 02 June 2025 00:38:56 +0000 (0:00:00.129) 0:00:24.959 ***********
2025-06-02 00:38:56.566645 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:38:56.566822 | orchestrator |
2025-06-02 00:38:56.570799 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 00:38:56.573254 | orchestrator | Monday 02 June 2025 00:38:56 +0000 (0:00:00.124) 0:00:25.083 ***********
2025-06-02 00:38:56.697482 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:38:56.701866 | orchestrator |
2025-06-02 00:38:56.702327 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 00:38:56.703126 | orchestrator | Monday 02 June 2025 00:38:56 +0000 (0:00:00.131) 0:00:25.215 ***********
2025-06-02 00:38:56.898559 | orchestrator | changed: [testbed-node-4] => {
2025-06-02 00:38:56.898629 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 00:38:56.901698 | orchestrator |  "ceph_osd_devices": {
2025-06-02 00:38:56.903237 | orchestrator |  "sdb": {
2025-06-02 00:38:56.904404 | orchestrator |  "osd_lvm_uuid": "89fe9f69-ec16-58f3-8212-bc080cf4c28c"
2025-06-02 00:38:56.905699 | orchestrator |  },
2025-06-02 00:38:56.907420 | orchestrator |  "sdc": {
2025-06-02 00:38:56.910334 | orchestrator |  "osd_lvm_uuid": "3a308c11-b64c-503e-b49b-4b3a12050ecf"
2025-06-02 00:38:56.910900 | orchestrator |  }
2025-06-02 00:38:56.911712 | orchestrator |  },
2025-06-02 00:38:56.912030 | orchestrator |  "lvm_volumes": [
2025-06-02 00:38:56.912732 | orchestrator |  {
2025-06-02 00:38:56.912810 | orchestrator |  "data": "osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c",
2025-06-02 00:38:56.913341 | orchestrator |  "data_vg": "ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c"
2025-06-02 00:38:56.913656 | orchestrator |  },
2025-06-02 00:38:56.915203 | orchestrator |  {
2025-06-02 00:38:56.915509 | orchestrator |  "data": "osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf",
2025-06-02 00:38:56.915868 | orchestrator |  "data_vg": "ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf"
2025-06-02 00:38:56.916132 | orchestrator |  }
2025-06-02 00:38:56.916625 | orchestrator |  ]
2025-06-02 00:38:56.916946 | orchestrator |  }
2025-06-02 00:38:56.917336 | orchestrator | }
2025-06-02 00:38:56.917748 | orchestrator |
2025-06-02 00:38:56.918106 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 00:38:56.918409 | orchestrator | Monday 02 June 2025 00:38:56 +0000 (0:00:00.200) 0:00:25.416 ***********
2025-06-02 00:38:57.815564 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 00:38:57.816401 | orchestrator |
2025-06-02 00:38:57.818236 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2025-06-02 00:38:57.819561 | orchestrator | 2025-06-02 00:38:57.820083 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 00:38:57.820474 | orchestrator | Monday 02 June 2025 00:38:57 +0000 (0:00:00.918) 0:00:26.334 *********** 2025-06-02 00:38:58.318304 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 00:38:58.319708 | orchestrator | 2025-06-02 00:38:58.320326 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 00:38:58.321812 | orchestrator | Monday 02 June 2025 00:38:58 +0000 (0:00:00.502) 0:00:26.837 *********** 2025-06-02 00:38:58.836907 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:38:58.837102 | orchestrator | 2025-06-02 00:38:58.837794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:58.838210 | orchestrator | Monday 02 June 2025 00:38:58 +0000 (0:00:00.520) 0:00:27.357 *********** 2025-06-02 00:38:59.137205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-02 00:38:59.137497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-02 00:38:59.138679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-02 00:38:59.138910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-02 00:38:59.140465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-02 00:38:59.140492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-02 00:38:59.140503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-02 00:38:59.140514 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-02 00:38:59.140638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-02 00:38:59.141452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-02 00:38:59.141477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-02 00:38:59.141534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-02 00:38:59.141965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-02 00:38:59.142254 | orchestrator | 2025-06-02 00:38:59.142541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:59.142980 | orchestrator | Monday 02 June 2025 00:38:59 +0000 (0:00:00.299) 0:00:27.657 *********** 2025-06-02 00:38:59.308403 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:38:59.308479 | orchestrator | 2025-06-02 00:38:59.308493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:59.308617 | orchestrator | Monday 02 June 2025 00:38:59 +0000 (0:00:00.170) 0:00:27.828 *********** 2025-06-02 00:38:59.479503 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:38:59.481005 | orchestrator | 2025-06-02 00:38:59.481990 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:59.485804 | orchestrator | Monday 02 June 2025 00:38:59 +0000 (0:00:00.171) 0:00:27.999 *********** 2025-06-02 00:38:59.666714 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:38:59.668504 | orchestrator | 2025-06-02 00:38:59.669392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:59.671831 | 
orchestrator | Monday 02 June 2025 00:38:59 +0000 (0:00:00.185) 0:00:28.185 *********** 2025-06-02 00:38:59.841156 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:38:59.842823 | orchestrator | 2025-06-02 00:38:59.843830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:38:59.844693 | orchestrator | Monday 02 June 2025 00:38:59 +0000 (0:00:00.175) 0:00:28.360 *********** 2025-06-02 00:39:00.020450 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:39:00.021237 | orchestrator | 2025-06-02 00:39:00.025446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:39:00.025522 | orchestrator | Monday 02 June 2025 00:39:00 +0000 (0:00:00.178) 0:00:28.539 *********** 2025-06-02 00:39:00.201165 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:39:00.202163 | orchestrator | 2025-06-02 00:39:00.202887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:39:00.203941 | orchestrator | Monday 02 June 2025 00:39:00 +0000 (0:00:00.180) 0:00:28.719 *********** 2025-06-02 00:39:00.366735 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:39:00.367329 | orchestrator | 2025-06-02 00:39:00.367568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:39:00.368237 | orchestrator | Monday 02 June 2025 00:39:00 +0000 (0:00:00.167) 0:00:28.887 *********** 2025-06-02 00:39:00.546894 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:39:00.547894 | orchestrator | 2025-06-02 00:39:00.548785 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 00:39:00.549476 | orchestrator | Monday 02 June 2025 00:39:00 +0000 (0:00:00.179) 0:00:29.067 *********** 2025-06-02 00:39:01.060098 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119)
2025-06-02 00:39:01.062001 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119)
2025-06-02 00:39:01.064974 | orchestrator |
2025-06-02 00:39:01.069966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:39:01.071646 | orchestrator | Monday 02 June 2025 00:39:01 +0000 (0:00:00.511) 0:00:29.578 ***********
2025-06-02 00:39:01.674213 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e)
2025-06-02 00:39:01.674853 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e)
2025-06-02 00:39:01.675284 | orchestrator |
2025-06-02 00:39:01.675800 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:39:01.676113 | orchestrator | Monday 02 June 2025 00:39:01 +0000 (0:00:00.616) 0:00:30.195 ***********
2025-06-02 00:39:02.038763 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172)
2025-06-02 00:39:02.040378 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172)
2025-06-02 00:39:02.041605 | orchestrator |
2025-06-02 00:39:02.042484 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:39:02.044680 | orchestrator | Monday 02 June 2025 00:39:02 +0000 (0:00:00.363) 0:00:30.558 ***********
2025-06-02 00:39:02.418630 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb)
2025-06-02 00:39:02.420606 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb)
2025-06-02 00:39:02.422415 | orchestrator |
2025-06-02 00:39:02.422924 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:39:02.423763 | orchestrator | Monday 02 June 2025 00:39:02 +0000 (0:00:00.376) 0:00:30.934 ***********
2025-06-02 00:39:02.696456 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 00:39:02.696919 | orchestrator |
2025-06-02 00:39:02.699191 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:02.700107 | orchestrator | Monday 02 June 2025 00:39:02 +0000 (0:00:00.279) 0:00:31.214 ***********
2025-06-02 00:39:03.040919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-02 00:39:03.043649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-02 00:39:03.047490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-02 00:39:03.047521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-02 00:39:03.047693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-02 00:39:03.048080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-02 00:39:03.049414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-02 00:39:03.049795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-02 00:39:03.050118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-02 00:39:03.050861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-02 00:39:03.051135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-02 00:39:03.051495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-02 00:39:03.052071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-02 00:39:03.052496 | orchestrator |
2025-06-02 00:39:03.053152 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:03.053494 | orchestrator | Monday 02 June 2025 00:39:03 +0000 (0:00:00.346) 0:00:31.561 ***********
2025-06-02 00:39:03.219769 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:03.223871 | orchestrator |
2025-06-02 00:39:03.225636 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:03.229477 | orchestrator | Monday 02 June 2025 00:39:03 +0000 (0:00:00.178) 0:00:31.739 ***********
2025-06-02 00:39:03.397125 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:03.397200 | orchestrator |
2025-06-02 00:39:03.398749 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:03.398806 | orchestrator | Monday 02 June 2025 00:39:03 +0000 (0:00:00.175) 0:00:31.915 ***********
2025-06-02 00:39:03.608142 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:03.610735 | orchestrator |
2025-06-02 00:39:03.611610 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:03.612723 | orchestrator | Monday 02 June 2025 00:39:03 +0000 (0:00:00.209) 0:00:32.125 ***********
2025-06-02 00:39:03.774332 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:03.776540 | orchestrator |
2025-06-02 00:39:03.780212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:03.780247 | orchestrator | Monday 02 June 2025 00:39:03 +0000 (0:00:00.166) 0:00:32.292 ***********
2025-06-02 00:39:03.944064 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:03.947277 | orchestrator |
2025-06-02 00:39:03.947564 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:03.948905 | orchestrator | Monday 02 June 2025 00:39:03 +0000 (0:00:00.170) 0:00:32.463 ***********
2025-06-02 00:39:04.416963 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:04.419319 | orchestrator |
2025-06-02 00:39:04.419876 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:04.421431 | orchestrator | Monday 02 June 2025 00:39:04 +0000 (0:00:00.471) 0:00:32.934 ***********
2025-06-02 00:39:04.597873 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:04.598429 | orchestrator |
2025-06-02 00:39:04.599694 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:04.601475 | orchestrator | Monday 02 June 2025 00:39:04 +0000 (0:00:00.182) 0:00:33.117 ***********
2025-06-02 00:39:04.774532 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:04.774937 | orchestrator |
2025-06-02 00:39:04.775673 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:04.776206 | orchestrator | Monday 02 June 2025 00:39:04 +0000 (0:00:00.175) 0:00:33.293 ***********
2025-06-02 00:39:05.345912 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-02 00:39:05.347303 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-02 00:39:05.348332 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-02 00:39:05.349948 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-02 00:39:05.350625 | orchestrator |
2025-06-02 00:39:05.351729 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:05.351881 | orchestrator | Monday 02 June 2025 00:39:05 +0000 (0:00:00.571) 0:00:33.864
***********
2025-06-02 00:39:05.559736 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:05.560937 | orchestrator |
2025-06-02 00:39:05.561864 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:05.563157 | orchestrator | Monday 02 June 2025 00:39:05 +0000 (0:00:00.213) 0:00:34.078 ***********
2025-06-02 00:39:05.759130 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:05.759771 | orchestrator |
2025-06-02 00:39:05.760070 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:05.762551 | orchestrator | Monday 02 June 2025 00:39:05 +0000 (0:00:00.197) 0:00:34.276 ***********
2025-06-02 00:39:05.954560 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:05.955949 | orchestrator |
2025-06-02 00:39:05.956719 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:39:05.957597 | orchestrator | Monday 02 June 2025 00:39:05 +0000 (0:00:00.198) 0:00:34.474 ***********
2025-06-02 00:39:06.151995 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:06.152105 | orchestrator |
2025-06-02 00:39:06.152293 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-02 00:39:06.152597 | orchestrator | Monday 02 June 2025 00:39:06 +0000 (0:00:00.197) 0:00:34.671 ***********
2025-06-02 00:39:06.328549 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-06-02 00:39:06.329380 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-06-02 00:39:06.330905 | orchestrator |
2025-06-02 00:39:06.331635 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-02 00:39:06.333499 | orchestrator | Monday 02 June 2025 00:39:06 +0000 (0:00:00.175) 0:00:34.847 ***********
2025-06-02 00:39:06.449993 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:06.450884 | orchestrator |
2025-06-02 00:39:06.452053 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-02 00:39:06.452939 | orchestrator | Monday 02 June 2025 00:39:06 +0000 (0:00:00.121) 0:00:34.968 ***********
2025-06-02 00:39:06.583474 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:06.584052 | orchestrator |
2025-06-02 00:39:06.584927 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-02 00:39:06.587309 | orchestrator | Monday 02 June 2025 00:39:06 +0000 (0:00:00.133) 0:00:35.101 ***********
2025-06-02 00:39:06.710137 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:06.711070 | orchestrator |
2025-06-02 00:39:06.712255 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-02 00:39:06.713725 | orchestrator | Monday 02 June 2025 00:39:06 +0000 (0:00:00.126) 0:00:35.228 ***********
2025-06-02 00:39:07.039783 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:39:07.040424 | orchestrator |
2025-06-02 00:39:07.041214 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-02 00:39:07.042659 | orchestrator | Monday 02 June 2025 00:39:07 +0000 (0:00:00.329) 0:00:35.558 ***********
2025-06-02 00:39:07.206547 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'}})
2025-06-02 00:39:07.206963 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17a6e190-aa70-5b53-9f6a-9d016360bd22'}})
2025-06-02 00:39:07.208457 | orchestrator |
2025-06-02 00:39:07.209774 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-02 00:39:07.210665 | orchestrator | Monday 02 June 2025 00:39:07 +0000 (0:00:00.167) 0:00:35.725 ***********
2025-06-02 00:39:07.362334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'}})
2025-06-02 00:39:07.364814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17a6e190-aa70-5b53-9f6a-9d016360bd22'}})
2025-06-02 00:39:07.365821 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:07.366494 | orchestrator |
2025-06-02 00:39:07.367429 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-02 00:39:07.367939 | orchestrator | Monday 02 June 2025 00:39:07 +0000 (0:00:00.153) 0:00:35.878 ***********
2025-06-02 00:39:07.514343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'}})
2025-06-02 00:39:07.514876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17a6e190-aa70-5b53-9f6a-9d016360bd22'}})
2025-06-02 00:39:07.515687 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:07.518471 | orchestrator |
2025-06-02 00:39:07.518500 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-02 00:39:07.519078 | orchestrator | Monday 02 June 2025 00:39:07 +0000 (0:00:00.153) 0:00:36.031 ***********
2025-06-02 00:39:07.680627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'}})
2025-06-02 00:39:07.683752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17a6e190-aa70-5b53-9f6a-9d016360bd22'}})
2025-06-02 00:39:07.683797 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:07.684959 | orchestrator |
2025-06-02 00:39:07.685922 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-02 00:39:07.686603 | orchestrator | Monday 02 June 2025 00:39:07 +0000
(0:00:00.166) 0:00:36.198 ***********
2025-06-02 00:39:07.823740 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:39:07.824480 | orchestrator |
2025-06-02 00:39:07.825406 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-02 00:39:07.826277 | orchestrator | Monday 02 June 2025 00:39:07 +0000 (0:00:00.142) 0:00:36.340 ***********
2025-06-02 00:39:07.949970 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:39:07.950550 | orchestrator |
2025-06-02 00:39:07.951268 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-02 00:39:07.952225 | orchestrator | Monday 02 June 2025 00:39:07 +0000 (0:00:00.128) 0:00:36.469 ***********
2025-06-02 00:39:08.086444 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:08.086819 | orchestrator |
2025-06-02 00:39:08.087939 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-02 00:39:08.088811 | orchestrator | Monday 02 June 2025 00:39:08 +0000 (0:00:00.134) 0:00:36.603 ***********
2025-06-02 00:39:08.222248 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:08.222918 | orchestrator |
2025-06-02 00:39:08.224008 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-02 00:39:08.224885 | orchestrator | Monday 02 June 2025 00:39:08 +0000 (0:00:00.136) 0:00:36.740 ***********
2025-06-02 00:39:08.342161 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:08.343090 | orchestrator |
2025-06-02 00:39:08.343622 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 00:39:08.344219 | orchestrator | Monday 02 June 2025 00:39:08 +0000 (0:00:00.121) 0:00:36.861 ***********
2025-06-02 00:39:08.473766 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 00:39:08.475963 | orchestrator |  "ceph_osd_devices": {
2025-06-02 00:39:08.479622 | orchestrator |  "sdb": {
2025-06-02 00:39:08.480354 | orchestrator |  "osd_lvm_uuid": "93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644"
2025-06-02 00:39:08.481298 | orchestrator |  },
2025-06-02 00:39:08.482532 | orchestrator |  "sdc": {
2025-06-02 00:39:08.483644 | orchestrator |  "osd_lvm_uuid": "17a6e190-aa70-5b53-9f6a-9d016360bd22"
2025-06-02 00:39:08.484332 | orchestrator |  }
2025-06-02 00:39:08.485861 | orchestrator |  }
2025-06-02 00:39:08.486646 | orchestrator | }
2025-06-02 00:39:08.487594 | orchestrator |
2025-06-02 00:39:08.491180 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 00:39:08.491768 | orchestrator | Monday 02 June 2025 00:39:08 +0000 (0:00:00.131) 0:00:36.993 ***********
2025-06-02 00:39:08.608928 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:08.609537 | orchestrator |
2025-06-02 00:39:08.614238 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 00:39:08.614731 | orchestrator | Monday 02 June 2025 00:39:08 +0000 (0:00:00.132) 0:00:37.125 ***********
2025-06-02 00:39:08.938132 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:08.938406 | orchestrator |
2025-06-02 00:39:08.939701 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 00:39:08.940747 | orchestrator | Monday 02 June 2025 00:39:08 +0000 (0:00:00.329) 0:00:37.455 ***********
2025-06-02 00:39:09.081470 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:39:09.082106 | orchestrator |
2025-06-02 00:39:09.082853 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 00:39:09.083701 | orchestrator | Monday 02 June 2025 00:39:09 +0000 (0:00:00.145) 0:00:37.600 ***********
2025-06-02 00:39:09.305924 | orchestrator | changed: [testbed-node-5] => {
2025-06-02 00:39:09.306180 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 00:39:09.308044 | orchestrator |  "ceph_osd_devices": {
2025-06-02 00:39:09.309094 | orchestrator |  "sdb": {
2025-06-02 00:39:09.312414 | orchestrator |  "osd_lvm_uuid": "93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644"
2025-06-02 00:39:09.314400 | orchestrator |  },
2025-06-02 00:39:09.315647 | orchestrator |  "sdc": {
2025-06-02 00:39:09.316637 | orchestrator |  "osd_lvm_uuid": "17a6e190-aa70-5b53-9f6a-9d016360bd22"
2025-06-02 00:39:09.317293 | orchestrator |  }
2025-06-02 00:39:09.319701 | orchestrator |  },
2025-06-02 00:39:09.320349 | orchestrator |  "lvm_volumes": [
2025-06-02 00:39:09.321861 | orchestrator |  {
2025-06-02 00:39:09.322970 | orchestrator |  "data": "osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644",
2025-06-02 00:39:09.324184 | orchestrator |  "data_vg": "ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644"
2025-06-02 00:39:09.324441 | orchestrator |  },
2025-06-02 00:39:09.325420 | orchestrator |  {
2025-06-02 00:39:09.326279 | orchestrator |  "data": "osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22",
2025-06-02 00:39:09.327069 | orchestrator |  "data_vg": "ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22"
2025-06-02 00:39:09.330248 | orchestrator |  }
2025-06-02 00:39:09.331181 | orchestrator |  ]
2025-06-02 00:39:09.331205 | orchestrator |  }
2025-06-02 00:39:09.331220 | orchestrator | }
2025-06-02 00:39:09.331235 | orchestrator |
2025-06-02 00:39:09.331248 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 00:39:09.331260 | orchestrator | Monday 02 June 2025 00:39:09 +0000 (0:00:00.223) 0:00:37.823 ***********
2025-06-02 00:39:10.300663 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 00:39:10.300892 | orchestrator |
2025-06-02 00:39:10.301521 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:39:10.302804 | orchestrator | 2025-06-02 00:39:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:39:10.302937 | orchestrator | 2025-06-02 00:39:10 | INFO  | Please wait and do not abort execution.
2025-06-02 00:39:10.303034 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 00:39:10.304068 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 00:39:10.304487 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 00:39:10.305346 | orchestrator |
2025-06-02 00:39:10.305893 | orchestrator |
2025-06-02 00:39:10.306357 | orchestrator |
2025-06-02 00:39:10.306989 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:39:10.308108 | orchestrator | Monday 02 June 2025 00:39:10 +0000 (0:00:00.993) 0:00:38.817 ***********
2025-06-02 00:39:10.309088 | orchestrator | ===============================================================================
2025-06-02 00:39:10.309285 | orchestrator | Write configuration file ------------------------------------------------ 4.15s
2025-06-02 00:39:10.311118 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.08s
2025-06-02 00:39:10.313130 | orchestrator | Add known links to the list of available block devices ------------------ 1.01s
2025-06-02 00:39:10.314801 | orchestrator | Get initial list of available block devices ----------------------------- 1.00s
2025-06-02 00:39:10.315796 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s
2025-06-02 00:39:10.317696 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-06-02 00:39:10.317965 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2025-06-02 00:39:10.319424 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.70s
2025-06-02 00:39:10.320206 | orchestrator | Print DB devices -------------------------------------------------------- 0.65s
2025-06-02 00:39:10.321765 | orchestrator | Print configuration data ------------------------------------------------ 0.64s
2025-06-02 00:39:10.323067 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-06-02 00:39:10.323768 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-06-02 00:39:10.324767 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.59s
2025-06-02 00:39:10.326298 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.58s
2025-06-02 00:39:10.326456 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2025-06-02 00:39:10.328740 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s
2025-06-02 00:39:10.329788 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s
2025-06-02 00:39:10.331392 | orchestrator | Set WAL devices config data --------------------------------------------- 0.54s
2025-06-02 00:39:10.334640 | orchestrator | Add known partitions to the list of available block devices ------------- 0.54s
2025-06-02 00:39:10.335435 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s
2025-06-02 00:39:22.681449 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:39:22.681566 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:39:22.681585 | orchestrator | Registering Redlock._release_script
2025-06-02 00:39:22.734761 | orchestrator | 2025-06-02 00:39:22 | INFO  | Task d7b1a816-b3f0-4e08-8392-62c22eb97c32 (sync inventory) is running in background. Output coming soon.
2025-06-02 00:40:06.329148 | orchestrator | 2025-06-02 00:39:48 | INFO  | Starting group_vars file reorganization
2025-06-02 00:40:06.329293 | orchestrator | 2025-06-02 00:39:48 | INFO  | Moved 0 file(s) to their respective directories
2025-06-02 00:40:06.329320 | orchestrator | 2025-06-02 00:39:48 | INFO  | Group_vars file reorganization completed
2025-06-02 00:40:06.329340 | orchestrator | 2025-06-02 00:39:50 | INFO  | Starting variable preparation from inventory
2025-06-02 00:40:06.329359 | orchestrator | 2025-06-02 00:39:52 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-06-02 00:40:06.329377 | orchestrator | 2025-06-02 00:39:52 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-06-02 00:40:06.329491 | orchestrator | 2025-06-02 00:39:52 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-06-02 00:40:06.329514 | orchestrator | 2025-06-02 00:39:52 | INFO  | 3 file(s) written, 6 host(s) processed
2025-06-02 00:40:06.329533 | orchestrator | 2025-06-02 00:39:52 | INFO  | Variable preparation completed:
2025-06-02 00:40:06.329552 | orchestrator | 2025-06-02 00:39:53 | INFO  | Starting inventory overwrite handling
2025-06-02 00:40:06.329571 | orchestrator | 2025-06-02 00:39:53 | INFO  | Handling group overwrites in 99-overwrite
2025-06-02 00:40:06.329590 | orchestrator | 2025-06-02 00:39:53 | INFO  | Removing group frr:children from 60-generic
2025-06-02 00:40:06.329611 | orchestrator | 2025-06-02 00:39:53 | INFO  | Removing group storage:children from 50-kolla
2025-06-02 00:40:06.329629 | orchestrator | 2025-06-02 00:39:53 | INFO  | Removing group netbird:children from 50-infrastruture
2025-06-02 00:40:06.329660 | orchestrator | 2025-06-02 00:39:53 | INFO  | Removing group ceph-rgw from 50-ceph
2025-06-02 00:40:06.329681 | orchestrator | 2025-06-02 00:39:53 | INFO  | Removing group ceph-mds from 50-ceph
2025-06-02 00:40:06.329701 | orchestrator | 2025-06-02 00:39:53 | INFO  | Handling group overwrites in 20-roles
2025-06-02 00:40:06.329721 | orchestrator | 2025-06-02 00:39:53 | INFO  | Removing group k3s_node from 50-infrastruture
2025-06-02 00:40:06.329741 | orchestrator | 2025-06-02 00:39:53 | INFO  | Removed 6 group(s) in total
2025-06-02 00:40:06.329761 | orchestrator | 2025-06-02 00:39:53 | INFO  | Inventory overwrite handling completed
2025-06-02 00:40:06.329780 | orchestrator | 2025-06-02 00:39:54 | INFO  | Starting merge of inventory files
2025-06-02 00:40:06.329800 | orchestrator | 2025-06-02 00:39:54 | INFO  | Inventory files merged successfully
2025-06-02 00:40:06.329820 | orchestrator | 2025-06-02 00:39:58 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-06-02 00:40:06.329840 | orchestrator | 2025-06-02 00:40:05 | INFO  | Successfully wrote ClusterShell configuration
2025-06-02 00:40:08.296007 | orchestrator | 2025-06-02 00:40:08 | INFO  | Task 6c672a33-82d5-4a60-b056-8728b4a7df54 (ceph-create-lvm-devices) was prepared for execution.
2025-06-02 00:40:08.296142 | orchestrator | 2025-06-02 00:40:08 | INFO  | It takes a moment until task 6c672a33-82d5-4a60-b056-8728b4a7df54 (ceph-create-lvm-devices) has been started and output is visible here.
2025-06-02 00:40:12.197343 | orchestrator |
2025-06-02 00:40:12.199010 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 00:40:12.200875 | orchestrator |
2025-06-02 00:40:12.201897 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 00:40:12.202574 | orchestrator | Monday 02 June 2025 00:40:12 +0000 (0:00:00.225) 0:00:00.225 ***********
2025-06-02 00:40:12.399774 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 00:40:12.401474 | orchestrator |
2025-06-02 00:40:12.403286 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 00:40:12.404837 | orchestrator | Monday 02 June 2025 00:40:12 +0000 (0:00:00.205) 0:00:00.430 ***********
2025-06-02 00:40:12.591595 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:40:12.591733 | orchestrator |
2025-06-02 00:40:12.593264 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:12.594728 | orchestrator | Monday 02 June 2025 00:40:12 +0000 (0:00:00.190) 0:00:00.621 ***********
2025-06-02 00:40:12.897821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-02 00:40:12.897900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-02 00:40:12.899608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-02 00:40:12.901316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-02 00:40:12.903265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-02 00:40:12.905010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-02 00:40:12.906166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-02 00:40:12.907334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-02 00:40:12.908474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-02 00:40:12.909560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-02 00:40:12.910642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-02 00:40:12.911753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-02 00:40:12.912632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-02 00:40:12.913521 | orchestrator |
2025-06-02 00:40:12.914678 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:12.915204 | orchestrator | Monday 02 June 2025 00:40:12 +0000 (0:00:00.306) 0:00:00.927 ***********
2025-06-02 00:40:13.310827 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:13.312649 | orchestrator |
2025-06-02 00:40:13.313718 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:13.314976 | orchestrator | Monday 02 June 2025 00:40:13 +0000 (0:00:00.414) 0:00:01.342 ***********
2025-06-02 00:40:13.496918 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:13.498391 | orchestrator |
2025-06-02 00:40:13.499819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:13.502572 | orchestrator | Monday 02 June 2025 00:40:13 +0000 (0:00:00.182) 0:00:01.524 ***********
2025-06-02 00:40:13.678628 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:13.682686 | orchestrator |
2025-06-02 00:40:13.682761 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:13.682777 | orchestrator | Monday 02 June 2025 00:40:13 +0000 (0:00:00.184) 0:00:01.709 ***********
2025-06-02 00:40:13.860019 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:13.860403 | orchestrator |
2025-06-02 00:40:13.864488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:13.864521 | orchestrator | Monday 02 June 2025 00:40:13 +0000 (0:00:00.181) 0:00:01.891 ***********
2025-06-02 00:40:14.045046 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:14.047192 | orchestrator |
2025-06-02 00:40:14.048930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:14.050227 | orchestrator | Monday 02 June 2025 00:40:14 +0000 (0:00:00.185) 0:00:02.077 ***********
2025-06-02 00:40:14.239296 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:14.241732 | orchestrator |
2025-06-02 00:40:14.242556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:14.243506 | orchestrator | Monday 02 June 2025 00:40:14 +0000 (0:00:00.192) 0:00:02.269 ***********
2025-06-02 00:40:14.405800 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:14.406203 | orchestrator |
2025-06-02 00:40:14.407105 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:14.407826 | orchestrator | Monday 02 June 2025 00:40:14 +0000 (0:00:00.166) 0:00:02.435 ***********
2025-06-02 00:40:14.584622 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:14.584683 | orchestrator |
2025-06-02 00:40:14.585878 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:14.586473 | orchestrator | Monday 02 June 2025 00:40:14 +0000 (0:00:00.178) 0:00:02.614 ***********
2025-06-02 00:40:14.977915 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f)
2025-06-02 00:40:14.979406 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f)
2025-06-02 00:40:14.980657 | orchestrator |
2025-06-02 00:40:14.981238 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:14.981679 | orchestrator | Monday 02 June 2025 00:40:14 +0000 (0:00:00.395) 0:00:03.010 ***********
2025-06-02 00:40:15.368815 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931)
2025-06-02 00:40:15.368904 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931)
2025-06-02 00:40:15.368919 | orchestrator |
2025-06-02 00:40:15.368933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:15.368944 | orchestrator | Monday 02 June 2025 00:40:15 +0000 (0:00:00.387) 0:00:03.397 ***********
2025-06-02 00:40:15.850801 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1)
2025-06-02 00:40:15.851864 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1)
2025-06-02 00:40:15.852387 | orchestrator |
2025-06-02 00:40:15.853216 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:15.854071 | orchestrator | Monday 02 June 2025 00:40:15 +0000 (0:00:00.485) 0:00:03.883 ***********
2025-06-02 00:40:16.407072 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2)
2025-06-02 00:40:16.408016 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2)
2025-06-02 00:40:16.409546 | orchestrator |
2025-06-02 00:40:16.409984 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:16.410850 | orchestrator | Monday 02 June 2025 00:40:16 +0000 (0:00:00.555) 0:00:04.438 ***********
2025-06-02 00:40:16.930722 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 00:40:16.931634 | orchestrator |
2025-06-02 00:40:16.933267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:16.934106 | orchestrator | Monday 02 June 2025 00:40:16 +0000 (0:00:00.524) 0:00:04.962 ***********
2025-06-02 00:40:17.301993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-02 00:40:17.303194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-02 00:40:17.304489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-02 00:40:17.305524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-02 00:40:17.306443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-02 00:40:17.307356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-02 00:40:17.308129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-02 00:40:17.308796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-02 00:40:17.309281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-02 00:40:17.309907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-02 00:40:17.310487 | orchestrator | included:
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-02 00:40:17.311138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 00:40:17.311584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 00:40:17.312057 | orchestrator | 2025-06-02 00:40:17.312771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:17.313122 | orchestrator | Monday 02 June 2025 00:40:17 +0000 (0:00:00.371) 0:00:05.334 *********** 2025-06-02 00:40:17.483675 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:17.484217 | orchestrator | 2025-06-02 00:40:17.485060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:17.485818 | orchestrator | Monday 02 June 2025 00:40:17 +0000 (0:00:00.178) 0:00:05.512 *********** 2025-06-02 00:40:17.651776 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:17.652233 | orchestrator | 2025-06-02 00:40:17.653230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:17.653839 | orchestrator | Monday 02 June 2025 00:40:17 +0000 (0:00:00.171) 0:00:05.683 *********** 2025-06-02 00:40:17.833779 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:17.834895 | orchestrator | 2025-06-02 00:40:17.835354 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:17.836233 | orchestrator | Monday 02 June 2025 00:40:17 +0000 (0:00:00.181) 0:00:05.865 *********** 2025-06-02 00:40:18.030672 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:18.031204 | orchestrator | 2025-06-02 00:40:18.032040 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:18.032691 | orchestrator | Monday 02 June 2025 
00:40:18 +0000 (0:00:00.196) 0:00:06.062 *********** 2025-06-02 00:40:18.237060 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:18.237292 | orchestrator | 2025-06-02 00:40:18.237980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:18.238446 | orchestrator | Monday 02 June 2025 00:40:18 +0000 (0:00:00.206) 0:00:06.268 *********** 2025-06-02 00:40:18.421090 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:18.421916 | orchestrator | 2025-06-02 00:40:18.422695 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:18.423506 | orchestrator | Monday 02 June 2025 00:40:18 +0000 (0:00:00.182) 0:00:06.451 *********** 2025-06-02 00:40:18.609744 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:18.610788 | orchestrator | 2025-06-02 00:40:18.611635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:18.612587 | orchestrator | Monday 02 June 2025 00:40:18 +0000 (0:00:00.190) 0:00:06.641 *********** 2025-06-02 00:40:18.802882 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:18.803734 | orchestrator | 2025-06-02 00:40:18.804149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:18.806180 | orchestrator | Monday 02 June 2025 00:40:18 +0000 (0:00:00.192) 0:00:06.834 *********** 2025-06-02 00:40:19.793820 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 00:40:19.794384 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-02 00:40:19.795370 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 00:40:19.795783 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 00:40:19.796250 | orchestrator | 2025-06-02 00:40:19.796539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:19.796822 
| orchestrator | Monday 02 June 2025 00:40:19 +0000 (0:00:00.986) 0:00:07.820 *********** 2025-06-02 00:40:19.988805 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:19.989270 | orchestrator | 2025-06-02 00:40:19.990116 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:19.993053 | orchestrator | Monday 02 June 2025 00:40:19 +0000 (0:00:00.199) 0:00:08.020 *********** 2025-06-02 00:40:20.174758 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:20.174858 | orchestrator | 2025-06-02 00:40:20.175597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:20.176229 | orchestrator | Monday 02 June 2025 00:40:20 +0000 (0:00:00.185) 0:00:08.205 *********** 2025-06-02 00:40:20.362173 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:20.362650 | orchestrator | 2025-06-02 00:40:20.363446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 00:40:20.364918 | orchestrator | Monday 02 June 2025 00:40:20 +0000 (0:00:00.187) 0:00:08.393 *********** 2025-06-02 00:40:20.548713 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:20.549281 | orchestrator | 2025-06-02 00:40:20.550704 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 00:40:20.550930 | orchestrator | Monday 02 June 2025 00:40:20 +0000 (0:00:00.185) 0:00:08.578 *********** 2025-06-02 00:40:20.676817 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:20.676930 | orchestrator | 2025-06-02 00:40:20.677030 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 00:40:20.677048 | orchestrator | Monday 02 June 2025 00:40:20 +0000 (0:00:00.127) 0:00:08.706 *********** 2025-06-02 00:40:20.855872 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'3a2aacf8-31c8-546a-a559-f7f9618b27d4'}}) 2025-06-02 00:40:20.856463 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1905453d-e612-5c47-8424-6bc4888ba216'}}) 2025-06-02 00:40:20.857105 | orchestrator | 2025-06-02 00:40:20.857919 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 00:40:20.859063 | orchestrator | Monday 02 June 2025 00:40:20 +0000 (0:00:00.180) 0:00:08.887 *********** 2025-06-02 00:40:22.715583 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'}) 2025-06-02 00:40:22.715691 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'}) 2025-06-02 00:40:22.718136 | orchestrator | 2025-06-02 00:40:22.718600 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 00:40:22.719652 | orchestrator | Monday 02 June 2025 00:40:22 +0000 (0:00:01.857) 0:00:10.745 *********** 2025-06-02 00:40:22.866253 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})  2025-06-02 00:40:22.867573 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})  2025-06-02 00:40:22.869831 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:22.870404 | orchestrator | 2025-06-02 00:40:22.870938 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 00:40:22.871640 | orchestrator | Monday 02 June 2025 00:40:22 +0000 (0:00:00.152) 0:00:10.897 *********** 2025-06-02 00:40:24.252879 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'}) 2025-06-02 00:40:24.253788 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'}) 2025-06-02 00:40:24.255184 | orchestrator | 2025-06-02 00:40:24.255614 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 00:40:24.256967 | orchestrator | Monday 02 June 2025 00:40:24 +0000 (0:00:01.384) 0:00:12.282 *********** 2025-06-02 00:40:24.405883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})  2025-06-02 00:40:24.406133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})  2025-06-02 00:40:24.407097 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:24.408044 | orchestrator | 2025-06-02 00:40:24.409787 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 00:40:24.410534 | orchestrator | Monday 02 June 2025 00:40:24 +0000 (0:00:00.154) 0:00:12.436 *********** 2025-06-02 00:40:24.546276 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:24.546845 | orchestrator | 2025-06-02 00:40:24.547267 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 00:40:24.548405 | orchestrator | Monday 02 June 2025 00:40:24 +0000 (0:00:00.140) 0:00:12.577 *********** 2025-06-02 00:40:24.874914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})  2025-06-02 00:40:24.876014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})  2025-06-02 00:40:24.876930 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:24.878336 | orchestrator | 2025-06-02 00:40:24.879480 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 00:40:24.880369 | orchestrator | Monday 02 June 2025 00:40:24 +0000 (0:00:00.328) 0:00:12.905 *********** 2025-06-02 00:40:25.005531 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:25.005625 | orchestrator | 2025-06-02 00:40:25.006608 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 00:40:25.007911 | orchestrator | Monday 02 June 2025 00:40:24 +0000 (0:00:00.131) 0:00:13.037 *********** 2025-06-02 00:40:25.154365 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})  2025-06-02 00:40:25.154528 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})  2025-06-02 00:40:25.155689 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:25.156772 | orchestrator | 2025-06-02 00:40:25.157473 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 00:40:25.158241 | orchestrator | Monday 02 June 2025 00:40:25 +0000 (0:00:00.147) 0:00:13.184 *********** 2025-06-02 00:40:25.291607 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:25.293797 | orchestrator | 2025-06-02 00:40:25.294992 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 00:40:25.295074 | orchestrator | Monday 02 June 2025 00:40:25 +0000 (0:00:00.136) 0:00:13.321 *********** 2025-06-02 00:40:25.435789 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})  2025-06-02 00:40:25.435880 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})  2025-06-02 00:40:25.435991 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:25.436206 | orchestrator | 2025-06-02 00:40:25.436579 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 00:40:25.437611 | orchestrator | Monday 02 June 2025 00:40:25 +0000 (0:00:00.146) 0:00:13.467 *********** 2025-06-02 00:40:25.583674 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:40:25.583767 | orchestrator | 2025-06-02 00:40:25.585883 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 00:40:25.586311 | orchestrator | Monday 02 June 2025 00:40:25 +0000 (0:00:00.146) 0:00:13.613 *********** 2025-06-02 00:40:25.733530 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})  2025-06-02 00:40:25.736364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})  2025-06-02 00:40:25.736402 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:25.736511 | orchestrator | 2025-06-02 00:40:25.738139 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 00:40:25.738174 | orchestrator | Monday 02 June 2025 00:40:25 +0000 (0:00:00.149) 0:00:13.763 *********** 2025-06-02 00:40:25.871311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})  
2025-06-02 00:40:25.871703 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})  2025-06-02 00:40:25.872331 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:25.872983 | orchestrator | 2025-06-02 00:40:25.873820 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 00:40:25.874228 | orchestrator | Monday 02 June 2025 00:40:25 +0000 (0:00:00.139) 0:00:13.902 *********** 2025-06-02 00:40:26.017379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})  2025-06-02 00:40:26.017616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})  2025-06-02 00:40:26.018523 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:26.019211 | orchestrator | 2025-06-02 00:40:26.020412 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 00:40:26.020905 | orchestrator | Monday 02 June 2025 00:40:26 +0000 (0:00:00.147) 0:00:14.049 *********** 2025-06-02 00:40:26.156961 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:26.157164 | orchestrator | 2025-06-02 00:40:26.157239 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 00:40:26.157766 | orchestrator | Monday 02 June 2025 00:40:26 +0000 (0:00:00.139) 0:00:14.188 *********** 2025-06-02 00:40:26.305928 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:26.306574 | orchestrator | 2025-06-02 00:40:26.307326 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 00:40:26.308583 | orchestrator | Monday 02 June 2025 00:40:26 +0000 (0:00:00.148) 
0:00:14.337 *********** 2025-06-02 00:40:26.439509 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:26.439604 | orchestrator | 2025-06-02 00:40:26.439934 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 00:40:26.440359 | orchestrator | Monday 02 June 2025 00:40:26 +0000 (0:00:00.134) 0:00:14.471 *********** 2025-06-02 00:40:26.806010 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 00:40:26.806652 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 00:40:26.806963 | orchestrator | } 2025-06-02 00:40:26.808618 | orchestrator | 2025-06-02 00:40:26.809309 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 00:40:26.810116 | orchestrator | Monday 02 June 2025 00:40:26 +0000 (0:00:00.363) 0:00:14.835 *********** 2025-06-02 00:40:26.939311 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 00:40:26.940016 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 00:40:26.941577 | orchestrator | } 2025-06-02 00:40:26.942336 | orchestrator | 2025-06-02 00:40:26.943055 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 00:40:26.944020 | orchestrator | Monday 02 June 2025 00:40:26 +0000 (0:00:00.136) 0:00:14.971 *********** 2025-06-02 00:40:27.068801 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 00:40:27.069334 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 00:40:27.069897 | orchestrator | } 2025-06-02 00:40:27.070847 | orchestrator | 2025-06-02 00:40:27.071497 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 00:40:27.071902 | orchestrator | Monday 02 June 2025 00:40:27 +0000 (0:00:00.126) 0:00:15.097 *********** 2025-06-02 00:40:27.727737 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:40:27.727837 | orchestrator | 2025-06-02 00:40:27.727966 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-06-02 00:40:27.729011 | orchestrator | Monday 02 June 2025 00:40:27 +0000 (0:00:00.660) 0:00:15.758 *********** 2025-06-02 00:40:28.223995 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:40:28.224167 | orchestrator | 2025-06-02 00:40:28.224975 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 00:40:28.225823 | orchestrator | Monday 02 June 2025 00:40:28 +0000 (0:00:00.494) 0:00:16.252 *********** 2025-06-02 00:40:28.723203 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:40:28.724011 | orchestrator | 2025-06-02 00:40:28.724650 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 00:40:28.725539 | orchestrator | Monday 02 June 2025 00:40:28 +0000 (0:00:00.499) 0:00:16.752 *********** 2025-06-02 00:40:28.854832 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:40:28.855658 | orchestrator | 2025-06-02 00:40:28.856672 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 00:40:28.858157 | orchestrator | Monday 02 June 2025 00:40:28 +0000 (0:00:00.134) 0:00:16.886 *********** 2025-06-02 00:40:28.965463 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:28.965554 | orchestrator | 2025-06-02 00:40:28.965570 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 00:40:28.965583 | orchestrator | Monday 02 June 2025 00:40:28 +0000 (0:00:00.106) 0:00:16.993 *********** 2025-06-02 00:40:29.070562 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:29.071725 | orchestrator | 2025-06-02 00:40:29.073243 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 00:40:29.073565 | orchestrator | Monday 02 June 2025 00:40:29 +0000 (0:00:00.108) 0:00:17.101 *********** 2025-06-02 00:40:29.202377 | orchestrator | ok: 
[testbed-node-3] => { 2025-06-02 00:40:29.203394 | orchestrator |  "vgs_report": { 2025-06-02 00:40:29.204099 | orchestrator |  "vg": [] 2025-06-02 00:40:29.205295 | orchestrator |  } 2025-06-02 00:40:29.205990 | orchestrator | } 2025-06-02 00:40:29.206713 | orchestrator | 2025-06-02 00:40:29.207528 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 00:40:29.208004 | orchestrator | Monday 02 June 2025 00:40:29 +0000 (0:00:00.132) 0:00:17.234 *********** 2025-06-02 00:40:29.335302 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:29.336215 | orchestrator | 2025-06-02 00:40:29.336659 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 00:40:29.337341 | orchestrator | Monday 02 June 2025 00:40:29 +0000 (0:00:00.132) 0:00:17.367 *********** 2025-06-02 00:40:29.480079 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:29.480165 | orchestrator | 2025-06-02 00:40:29.480843 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 00:40:29.481600 | orchestrator | Monday 02 June 2025 00:40:29 +0000 (0:00:00.143) 0:00:17.511 *********** 2025-06-02 00:40:29.803846 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:29.803929 | orchestrator | 2025-06-02 00:40:29.804101 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 00:40:29.804387 | orchestrator | Monday 02 June 2025 00:40:29 +0000 (0:00:00.320) 0:00:17.831 *********** 2025-06-02 00:40:29.931387 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:29.931903 | orchestrator | 2025-06-02 00:40:29.932915 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 00:40:29.933588 | orchestrator | Monday 02 June 2025 00:40:29 +0000 (0:00:00.130) 0:00:17.961 *********** 2025-06-02 00:40:30.085254 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 00:40:30.085381 | orchestrator | 2025-06-02 00:40:30.085850 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 00:40:30.086696 | orchestrator | Monday 02 June 2025 00:40:30 +0000 (0:00:00.152) 0:00:18.113 *********** 2025-06-02 00:40:30.223926 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:30.224464 | orchestrator | 2025-06-02 00:40:30.224869 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 00:40:30.225366 | orchestrator | Monday 02 June 2025 00:40:30 +0000 (0:00:00.141) 0:00:18.255 *********** 2025-06-02 00:40:30.352852 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:30.353085 | orchestrator | 2025-06-02 00:40:30.353746 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 00:40:30.354379 | orchestrator | Monday 02 June 2025 00:40:30 +0000 (0:00:00.129) 0:00:18.384 *********** 2025-06-02 00:40:30.491756 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:30.492622 | orchestrator | 2025-06-02 00:40:30.493736 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 00:40:30.494525 | orchestrator | Monday 02 June 2025 00:40:30 +0000 (0:00:00.137) 0:00:18.522 *********** 2025-06-02 00:40:30.630996 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:30.631377 | orchestrator | 2025-06-02 00:40:30.633235 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 00:40:30.633800 | orchestrator | Monday 02 June 2025 00:40:30 +0000 (0:00:00.140) 0:00:18.662 *********** 2025-06-02 00:40:30.769579 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:30.769815 | orchestrator | 2025-06-02 00:40:30.770692 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 00:40:30.771101 | 
orchestrator | Monday 02 June 2025 00:40:30 +0000 (0:00:00.138) 0:00:18.801 *********** 2025-06-02 00:40:30.912565 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:30.913525 | orchestrator | 2025-06-02 00:40:30.913905 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 00:40:30.914324 | orchestrator | Monday 02 June 2025 00:40:30 +0000 (0:00:00.139) 0:00:18.941 *********** 2025-06-02 00:40:31.051988 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:31.052523 | orchestrator | 2025-06-02 00:40:31.053712 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 00:40:31.054413 | orchestrator | Monday 02 June 2025 00:40:31 +0000 (0:00:00.142) 0:00:19.083 *********** 2025-06-02 00:40:31.191416 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:31.191675 | orchestrator | 2025-06-02 00:40:31.192299 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 00:40:31.192789 | orchestrator | Monday 02 June 2025 00:40:31 +0000 (0:00:00.137) 0:00:19.221 *********** 2025-06-02 00:40:31.321265 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:40:31.323773 | orchestrator | 2025-06-02 00:40:31.324343 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 00:40:31.325625 | orchestrator | Monday 02 June 2025 00:40:31 +0000 (0:00:00.131) 0:00:19.352 *********** 2025-06-02 00:40:31.478894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})  2025-06-02 00:40:31.479080 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})  2025-06-02 00:40:31.479945 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
00:40:31.480563 | orchestrator |
2025-06-02 00:40:31.481330 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-02 00:40:31.482187 | orchestrator | Monday 02 June 2025  00:40:31 +0000 (0:00:00.155)       0:00:19.509 ***********
2025-06-02 00:40:31.841040 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:31.843796 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:31.844519 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:31.845303 | orchestrator |
2025-06-02 00:40:31.846011 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-02 00:40:31.846771 | orchestrator | Monday 02 June 2025  00:40:31 +0000 (0:00:00.359)       0:00:19.868 ***********
2025-06-02 00:40:32.008660 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:32.009556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:32.010246 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:32.011018 | orchestrator |
2025-06-02 00:40:32.011924 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-02 00:40:32.012566 | orchestrator | Monday 02 June 2025  00:40:31 +0000 (0:00:00.170)       0:00:20.039 ***********
2025-06-02 00:40:32.159616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:32.160227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:32.161308 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:32.162182 | orchestrator |
2025-06-02 00:40:32.162949 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-02 00:40:32.163839 | orchestrator | Monday 02 June 2025  00:40:32 +0000 (0:00:00.151)       0:00:20.191 ***********
2025-06-02 00:40:32.315923 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:32.317344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:32.317933 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:32.318879 | orchestrator |
2025-06-02 00:40:32.320810 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-02 00:40:32.321789 | orchestrator | Monday 02 June 2025  00:40:32 +0000 (0:00:00.155)       0:00:20.346 ***********
2025-06-02 00:40:32.473067 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:32.474063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:32.474101 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:32.474497 | orchestrator |
2025-06-02 00:40:32.475026 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 00:40:32.475587 | orchestrator | Monday 02 June 2025  00:40:32 +0000 (0:00:00.156)       0:00:20.502 ***********
2025-06-02 00:40:32.627622 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:32.628070 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:32.629508 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:32.629978 | orchestrator |
2025-06-02 00:40:32.630907 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 00:40:32.631968 | orchestrator | Monday 02 June 2025  00:40:32 +0000 (0:00:00.156)       0:00:20.658 ***********
2025-06-02 00:40:32.783870 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:32.784446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:32.785498 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:32.786744 | orchestrator |
2025-06-02 00:40:32.788025 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 00:40:32.788983 | orchestrator | Monday 02 June 2025  00:40:32 +0000 (0:00:00.155)       0:00:20.814 ***********
2025-06-02 00:40:33.307024 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:40:33.307129 | orchestrator |
2025-06-02 00:40:33.308160 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 00:40:33.309017 | orchestrator | Monday 02 June 2025  00:40:33 +0000 (0:00:00.522)       0:00:21.336 ***********
2025-06-02 00:40:33.809414 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:40:33.809707 | orchestrator |
2025-06-02 00:40:33.810396 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 00:40:33.811071 | orchestrator | Monday 02 June 2025  00:40:33 +0000 (0:00:00.503)       0:00:21.840 ***********
2025-06-02 00:40:33.962817 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:40:33.963199 | orchestrator |
2025-06-02 00:40:33.963862 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 00:40:33.965835 | orchestrator | Monday 02 June 2025  00:40:33 +0000 (0:00:00.151)       0:00:21.991 ***********
2025-06-02 00:40:34.150766 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'vg_name': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:34.150998 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'vg_name': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:34.151716 | orchestrator |
2025-06-02 00:40:34.152195 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 00:40:34.152891 | orchestrator | Monday 02 June 2025  00:40:34 +0000 (0:00:00.189)       0:00:22.181 ***********
2025-06-02 00:40:34.319283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:34.320073 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:34.320875 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:34.321705 | orchestrator |
2025-06-02 00:40:34.322683 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 00:40:34.323665 | orchestrator | Monday 02 June 2025  00:40:34 +0000 (0:00:00.167)       0:00:22.348 ***********
2025-06-02 00:40:34.661547 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:34.662112 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:34.663414 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:34.664635 | orchestrator |
2025-06-02 00:40:34.664914 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 00:40:34.665965 | orchestrator | Monday 02 June 2025  00:40:34 +0000 (0:00:00.341)       0:00:22.690 ***********
2025-06-02 00:40:34.826539 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'})
2025-06-02 00:40:34.827637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'})
2025-06-02 00:40:34.828477 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:40:34.832349 | orchestrator |
2025-06-02 00:40:34.832377 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 00:40:34.832392 | orchestrator | Monday 02 June 2025  00:40:34 +0000 (0:00:00.167)       0:00:22.857 ***********
2025-06-02 00:40:35.112871 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 00:40:35.114323 | orchestrator |     "lvm_report": {
2025-06-02 00:40:35.114361 | orchestrator |         "lv": [
2025-06-02 00:40:35.114987 | orchestrator |             {
2025-06-02 00:40:35.115817 | orchestrator |                 "lv_name": "osd-block-1905453d-e612-5c47-8424-6bc4888ba216",
2025-06-02 00:40:35.116409 | orchestrator |                 "vg_name": "ceph-1905453d-e612-5c47-8424-6bc4888ba216"
2025-06-02 00:40:35.116984 | orchestrator |             },
2025-06-02 00:40:35.117619 | orchestrator |             {
2025-06-02 00:40:35.118176 | orchestrator |                 "lv_name": "osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4",
2025-06-02 00:40:35.118754 | orchestrator |                 "vg_name": "ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4"
2025-06-02 00:40:35.119307 | orchestrator |             }
2025-06-02 00:40:35.119905 | orchestrator |         ],
2025-06-02 00:40:35.120617 | orchestrator |         "pv": [
2025-06-02 00:40:35.121402 | orchestrator |             {
2025-06-02 00:40:35.121643 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-02 00:40:35.122083 | orchestrator |                 "vg_name": "ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4"
2025-06-02 00:40:35.122443 | orchestrator |             },
2025-06-02 00:40:35.122804 | orchestrator |             {
2025-06-02 00:40:35.123190 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-02 00:40:35.123594 | orchestrator |                 "vg_name": "ceph-1905453d-e612-5c47-8424-6bc4888ba216"
2025-06-02 00:40:35.123984 | orchestrator |             }
2025-06-02 00:40:35.124342 | orchestrator |         ]
2025-06-02 00:40:35.124753 | orchestrator |     }
2025-06-02 00:40:35.125199 | orchestrator | }
2025-06-02 00:40:35.125586 | orchestrator |
2025-06-02 00:40:35.126101 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 00:40:35.126306 | orchestrator |
2025-06-02 00:40:35.126726 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 00:40:35.127068 | orchestrator | Monday 02 June 2025  00:40:35 +0000 (0:00:00.283)       0:00:23.141 ***********
2025-06-02 00:40:35.344502 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 00:40:35.345127 | orchestrator |
2025-06-02 00:40:35.346624 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 00:40:35.347559 | orchestrator | Monday 02 June 2025  00:40:35 +0000 (0:00:00.233)       0:00:23.375 ***********
2025-06-02 00:40:35.568563 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:40:35.568812 | orchestrator |
2025-06-02 00:40:35.568930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:35.569610 | orchestrator | Monday 02 June 2025  00:40:35 +0000 (0:00:00.224)       0:00:23.600 ***********
2025-06-02 00:40:35.966629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-02 00:40:35.968383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-02 00:40:35.969810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-02 00:40:35.971939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-02 00:40:35.971965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-02 00:40:35.972859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-02 00:40:35.973674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-02 00:40:35.974123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-02 00:40:35.974627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-02 00:40:35.975249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-02 00:40:35.975572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-02 00:40:35.976063 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-02 00:40:35.977474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-02 00:40:35.977746 | orchestrator |
2025-06-02 00:40:35.978121 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:35.978537 | orchestrator | Monday 02 June 2025  00:40:35 +0000 (0:00:00.397)       0:00:23.998 ***********
2025-06-02 00:40:36.166750 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:36.167241 | orchestrator |
2025-06-02 00:40:36.167328 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:36.167981 | orchestrator | Monday 02 June 2025  00:40:36 +0000 (0:00:00.199)       0:00:24.197 ***********
2025-06-02 00:40:36.353005 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:36.356926 | orchestrator |
2025-06-02 00:40:36.356968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:36.357631 | orchestrator | Monday 02 June 2025  00:40:36 +0000 (0:00:00.187)       0:00:24.384 ***********
2025-06-02 00:40:36.555048 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:36.555507 | orchestrator |
2025-06-02 00:40:36.555585 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:36.556215 | orchestrator | Monday 02 June 2025  00:40:36 +0000 (0:00:00.201)       0:00:24.585 ***********
2025-06-02 00:40:37.139845 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:37.140602 | orchestrator |
2025-06-02 00:40:37.141380 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:37.142399 | orchestrator | Monday 02 June 2025  00:40:37 +0000 (0:00:00.582)       0:00:25.168 ***********
2025-06-02 00:40:37.340047 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:37.341156 | orchestrator |
2025-06-02 00:40:37.341843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:37.342635 | orchestrator | Monday 02 June 2025  00:40:37 +0000 (0:00:00.202)       0:00:25.371 ***********
2025-06-02 00:40:37.547565 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:37.547858 | orchestrator |
2025-06-02 00:40:37.548761 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:37.550139 | orchestrator | Monday 02 June 2025  00:40:37 +0000 (0:00:00.207)       0:00:25.578 ***********
2025-06-02 00:40:37.755840 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:37.755999 | orchestrator |
2025-06-02 00:40:37.756826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:37.757399 | orchestrator | Monday 02 June 2025  00:40:37 +0000 (0:00:00.208)       0:00:25.787 ***********
2025-06-02 00:40:37.954811 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:37.955378 | orchestrator |
2025-06-02 00:40:37.956842 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:37.957513 | orchestrator | Monday 02 June 2025  00:40:37 +0000 (0:00:00.197)       0:00:25.985 ***********
2025-06-02 00:40:38.356175 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04)
2025-06-02 00:40:38.359542 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04)
2025-06-02 00:40:38.360811 | orchestrator |
2025-06-02 00:40:38.361749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:38.362462 | orchestrator | Monday 02 June 2025  00:40:38 +0000 (0:00:00.400)       0:00:26.385 ***********
2025-06-02 00:40:38.766851 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d)
2025-06-02 00:40:38.767015 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d)
2025-06-02 00:40:38.768007 | orchestrator |
2025-06-02 00:40:38.768791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:38.769468 | orchestrator | Monday 02 June 2025  00:40:38 +0000 (0:00:00.412)       0:00:26.797 ***********
2025-06-02 00:40:39.175549 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd)
2025-06-02 00:40:39.177192 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd)
2025-06-02 00:40:39.177740 | orchestrator |
2025-06-02 00:40:39.178589 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:39.179533 | orchestrator | Monday 02 June 2025  00:40:39 +0000 (0:00:00.408)       0:00:27.206 ***********
2025-06-02 00:40:39.575982 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5)
2025-06-02 00:40:39.577098 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5)
2025-06-02 00:40:39.578506 | orchestrator |
2025-06-02 00:40:39.579342 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:39.580412 | orchestrator | Monday 02 June 2025  00:40:39 +0000 (0:00:00.400)       0:00:27.606 ***********
2025-06-02 00:40:39.913569 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 00:40:39.914472 | orchestrator |
2025-06-02 00:40:39.914949 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:39.916273 | orchestrator | Monday 02 June 2025  00:40:39 +0000 (0:00:00.334)       0:00:27.940 ***********
2025-06-02 00:40:40.567586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-02 00:40:40.568673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-02 00:40:40.568725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-02 00:40:40.569666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-02 00:40:40.570403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-02 00:40:40.572813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-02 00:40:40.572854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-02 00:40:40.573774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-02 00:40:40.574477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-02 00:40:40.575239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-02 00:40:40.575934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-02 00:40:40.576603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-02 00:40:40.577378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-02 00:40:40.577905 | orchestrator |
2025-06-02 00:40:40.578890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:40.579973 | orchestrator | Monday 02 June 2025  00:40:40 +0000 (0:00:00.657)       0:00:28.597 ***********
2025-06-02 00:40:40.814076 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:40.814572 | orchestrator |
2025-06-02 00:40:40.815594 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:40.816723 | orchestrator | Monday 02 June 2025  00:40:40 +0000 (0:00:00.247)       0:00:28.845 ***********
2025-06-02 00:40:41.021667 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:41.022475 | orchestrator |
2025-06-02 00:40:41.023484 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:41.024514 | orchestrator | Monday 02 June 2025  00:40:41 +0000 (0:00:00.207)       0:00:29.053 ***********
2025-06-02 00:40:41.210372 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:41.210556 | orchestrator |
2025-06-02 00:40:41.210576 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:41.210657 | orchestrator | Monday 02 June 2025  00:40:41 +0000 (0:00:00.187)       0:00:29.240 ***********
2025-06-02 00:40:41.415522 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:41.415622 | orchestrator |
2025-06-02 00:40:41.417169 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:41.418179 | orchestrator | Monday 02 June 2025  00:40:41 +0000 (0:00:00.205)       0:00:29.445 ***********
2025-06-02 00:40:41.639904 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:41.640103 | orchestrator |
2025-06-02 00:40:41.641337 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:41.642320 | orchestrator | Monday 02 June 2025  00:40:41 +0000 (0:00:00.223)       0:00:29.668 ***********
2025-06-02 00:40:41.851908 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:41.852945 | orchestrator |
2025-06-02 00:40:41.853245 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:41.854917 | orchestrator | Monday 02 June 2025  00:40:41 +0000 (0:00:00.214)       0:00:29.883 ***********
2025-06-02 00:40:42.077335 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:42.077843 | orchestrator |
2025-06-02 00:40:42.078389 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:42.079075 | orchestrator | Monday 02 June 2025  00:40:42 +0000 (0:00:00.224)       0:00:30.108 ***********
2025-06-02 00:40:42.276985 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:42.277140 | orchestrator |
2025-06-02 00:40:42.278085 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:42.278860 | orchestrator | Monday 02 June 2025  00:40:42 +0000 (0:00:00.198)       0:00:30.307 ***********
2025-06-02 00:40:43.131474 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-02 00:40:43.131695 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-02 00:40:43.132656 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-02 00:40:43.134136 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-02 00:40:43.134996 | orchestrator |
2025-06-02 00:40:43.138691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:43.139452 | orchestrator | Monday 02 June 2025  00:40:43 +0000 (0:00:00.855)       0:00:31.162 ***********
2025-06-02 00:40:43.326959 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:43.327125 | orchestrator |
2025-06-02 00:40:43.327728 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:43.328523 | orchestrator | Monday 02 June 2025  00:40:43 +0000 (0:00:00.195)       0:00:31.358 ***********
2025-06-02 00:40:43.508642 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:43.509451 | orchestrator |
2025-06-02 00:40:43.510816 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:43.511956 | orchestrator | Monday 02 June 2025  00:40:43 +0000 (0:00:00.180)       0:00:31.539 ***********
2025-06-02 00:40:44.117115 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:44.117833 | orchestrator |
2025-06-02 00:40:44.118681 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:40:44.119364 | orchestrator | Monday 02 June 2025  00:40:44 +0000 (0:00:00.609)       0:00:32.148 ***********
2025-06-02 00:40:44.321885 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:44.322225 | orchestrator |
2025-06-02 00:40:44.322908 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-02 00:40:44.324585 | orchestrator | Monday 02 June 2025  00:40:44 +0000 (0:00:00.203)       0:00:32.352 ***********
2025-06-02 00:40:44.456910 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:44.457369 | orchestrator |
2025-06-02 00:40:44.458486 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-02 00:40:44.459582 | orchestrator | Monday 02 June 2025  00:40:44 +0000 (0:00:00.135)       0:00:32.488 ***********
2025-06-02 00:40:44.672015 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89fe9f69-ec16-58f3-8212-bc080cf4c28c'}})
2025-06-02 00:40:44.672150 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a308c11-b64c-503e-b49b-4b3a12050ecf'}})
2025-06-02 00:40:44.672697 | orchestrator |
2025-06-02 00:40:44.673569 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-02 00:40:44.674521 | orchestrator | Monday 02 June 2025  00:40:44 +0000 (0:00:00.213)       0:00:32.702 ***********
2025-06-02 00:40:46.523520 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:46.524075 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:46.524283 | orchestrator |
2025-06-02 00:40:46.526099 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-02 00:40:46.526811 | orchestrator | Monday 02 June 2025  00:40:46 +0000 (0:00:01.850)       0:00:34.552 ***********
2025-06-02 00:40:46.671943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:46.672930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:46.673367 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:46.674632 | orchestrator |
2025-06-02 00:40:46.675582 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-02 00:40:46.676167 | orchestrator | Monday 02 June 2025  00:40:46 +0000 (0:00:00.150)       0:00:34.703 ***********
2025-06-02 00:40:47.946190 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:47.946508 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:47.947281 | orchestrator |
2025-06-02 00:40:47.947979 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-02 00:40:47.948993 | orchestrator | Monday 02 June 2025  00:40:47 +0000 (0:00:01.271)       0:00:35.975 ***********
2025-06-02 00:40:48.094129 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:48.094230 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:48.094246 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:48.095816 | orchestrator |
2025-06-02 00:40:48.095844 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-02 00:40:48.096603 | orchestrator | Monday 02 June 2025  00:40:48 +0000 (0:00:00.147)       0:00:36.122 ***********
2025-06-02 00:40:48.221950 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:48.222108 | orchestrator |
2025-06-02 00:40:48.222348 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-02 00:40:48.222645 | orchestrator | Monday 02 June 2025  00:40:48 +0000 (0:00:00.130)       0:00:36.253 ***********
2025-06-02 00:40:48.365506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:48.366164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:48.367098 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:48.367834 | orchestrator |
2025-06-02 00:40:48.368633 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-02 00:40:48.369407 | orchestrator | Monday 02 June 2025  00:40:48 +0000 (0:00:00.144)       0:00:36.397 ***********
2025-06-02 00:40:48.497310 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:48.497790 | orchestrator |
2025-06-02 00:40:48.498466 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-02 00:40:48.499091 | orchestrator | Monday 02 June 2025  00:40:48 +0000 (0:00:00.131)       0:00:36.529 ***********
2025-06-02 00:40:48.644837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:48.645004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:48.645532 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:48.646383 | orchestrator |
2025-06-02 00:40:48.648339 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-02 00:40:48.650271 | orchestrator | Monday 02 June 2025  00:40:48 +0000 (0:00:00.146)       0:00:36.675 ***********
2025-06-02 00:40:48.944070 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:48.944985 | orchestrator |
2025-06-02 00:40:48.946234 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-02 00:40:48.946944 | orchestrator | Monday 02 June 2025  00:40:48 +0000 (0:00:00.300)       0:00:36.976 ***********
2025-06-02 00:40:49.091353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:49.092048 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:49.092678 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:49.093300 | orchestrator |
2025-06-02 00:40:49.094123 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-02 00:40:49.095832 | orchestrator | Monday 02 June 2025  00:40:49 +0000 (0:00:00.147)       0:00:37.123 ***********
2025-06-02 00:40:49.221699 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:40:49.222580 | orchestrator |
2025-06-02 00:40:49.222921 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-02 00:40:49.224185 | orchestrator | Monday 02 June 2025  00:40:49 +0000 (0:00:00.130)       0:00:37.253 ***********
2025-06-02 00:40:49.358089 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:49.358341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:49.359779 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:49.360837 | orchestrator |
2025-06-02 00:40:49.362194 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-02 00:40:49.362895 | orchestrator | Monday 02 June 2025  00:40:49 +0000 (0:00:00.135)       0:00:37.389 ***********
2025-06-02 00:40:49.500002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:49.500844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:49.501851 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:49.502781 | orchestrator |
2025-06-02 00:40:49.503740 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-02 00:40:49.504990 | orchestrator | Monday 02 June 2025  00:40:49 +0000 (0:00:00.141)       0:00:37.531 ***********
2025-06-02 00:40:49.647790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:49.647973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:49.649534 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:49.649752 | orchestrator |
2025-06-02 00:40:49.650711 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-02 00:40:49.651369 | orchestrator | Monday 02 June 2025  00:40:49 +0000 (0:00:00.146)       0:00:37.677 ***********
2025-06-02 00:40:49.788569 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:49.790340 | orchestrator |
2025-06-02 00:40:49.792154 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-02 00:40:49.792908 | orchestrator | Monday 02 June 2025  00:40:49 +0000 (0:00:00.140)       0:00:37.818 ***********
2025-06-02 00:40:49.904495 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:49.904584 | orchestrator |
2025-06-02 00:40:49.906193 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-02 00:40:49.909306 | orchestrator | Monday 02 June 2025  00:40:49 +0000 (0:00:00.117)       0:00:37.935 ***********
2025-06-02 00:40:50.029569 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:50.030251 | orchestrator |
2025-06-02 00:40:50.031598 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-02 00:40:50.035750 | orchestrator | Monday 02 June 2025  00:40:50 +0000 (0:00:00.125)       0:00:38.061 ***********
2025-06-02 00:40:50.166962 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 00:40:50.168124 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-06-02 00:40:50.169559 | orchestrator | }
2025-06-02 00:40:50.170513 | orchestrator |
2025-06-02 00:40:50.171278 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-02 00:40:50.172239 | orchestrator | Monday 02 June 2025  00:40:50 +0000 (0:00:00.137)       0:00:38.198 ***********
2025-06-02 00:40:50.306994 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 00:40:50.308429 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-06-02 00:40:50.310105 | orchestrator | }
2025-06-02 00:40:50.310729 | orchestrator |
2025-06-02 00:40:50.311865 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-02 00:40:50.312951 | orchestrator | Monday 02 June 2025  00:40:50 +0000 (0:00:00.140)       0:00:38.338 ***********
2025-06-02 00:40:50.434844 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 00:40:50.435541 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-06-02 00:40:50.436847 | orchestrator | }
2025-06-02 00:40:50.436878 | orchestrator |
2025-06-02 00:40:50.437683 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-02 00:40:50.438621 | orchestrator | Monday 02 June 2025  00:40:50 +0000 (0:00:00.128)       0:00:38.467 ***********
2025-06-02 00:40:51.132855 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:40:51.133529 | orchestrator |
2025-06-02 00:40:51.135304 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-02 00:40:51.135337 | orchestrator | Monday 02 June 2025  00:40:51 +0000 (0:00:00.695)       0:00:39.162 ***********
2025-06-02 00:40:51.645185 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:40:51.646218 | orchestrator |
2025-06-02 00:40:51.647611 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-02 00:40:51.649249 | orchestrator | Monday 02 June 2025  00:40:51 +0000 (0:00:00.512)       0:00:39.675 ***********
2025-06-02 00:40:52.141496 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:40:52.142552 | orchestrator |
2025-06-02 00:40:52.143317 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-02 00:40:52.144243 | orchestrator | Monday 02 June 2025  00:40:52 +0000 (0:00:00.497)       0:00:40.173 ***********
2025-06-02
00:40:52.284929 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:40:52.285646 | orchestrator | 2025-06-02 00:40:52.286553 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 00:40:52.287457 | orchestrator | Monday 02 June 2025 00:40:52 +0000 (0:00:00.143) 0:00:40.316 *********** 2025-06-02 00:40:52.393630 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:52.393720 | orchestrator | 2025-06-02 00:40:52.394922 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 00:40:52.395856 | orchestrator | Monday 02 June 2025 00:40:52 +0000 (0:00:00.107) 0:00:40.423 *********** 2025-06-02 00:40:52.498926 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:52.499736 | orchestrator | 2025-06-02 00:40:52.500758 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 00:40:52.502096 | orchestrator | Monday 02 June 2025 00:40:52 +0000 (0:00:00.106) 0:00:40.530 *********** 2025-06-02 00:40:52.656096 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 00:40:52.656847 | orchestrator |  "vgs_report": { 2025-06-02 00:40:52.657862 | orchestrator |  "vg": [] 2025-06-02 00:40:52.659638 | orchestrator |  } 2025-06-02 00:40:52.660024 | orchestrator | } 2025-06-02 00:40:52.660778 | orchestrator | 2025-06-02 00:40:52.661223 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 00:40:52.662685 | orchestrator | Monday 02 June 2025 00:40:52 +0000 (0:00:00.156) 0:00:40.686 *********** 2025-06-02 00:40:52.795430 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:52.795568 | orchestrator | 2025-06-02 00:40:52.796273 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 00:40:52.796915 | orchestrator | Monday 02 June 2025 00:40:52 +0000 (0:00:00.139) 0:00:40.826 *********** 2025-06-02 
00:40:52.919664 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:52.919984 | orchestrator | 2025-06-02 00:40:52.920696 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 00:40:52.922804 | orchestrator | Monday 02 June 2025 00:40:52 +0000 (0:00:00.122) 0:00:40.949 *********** 2025-06-02 00:40:53.038309 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:53.038897 | orchestrator | 2025-06-02 00:40:53.040748 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 00:40:53.041329 | orchestrator | Monday 02 June 2025 00:40:53 +0000 (0:00:00.120) 0:00:41.069 *********** 2025-06-02 00:40:53.171535 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:53.171741 | orchestrator | 2025-06-02 00:40:53.172345 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 00:40:53.173606 | orchestrator | Monday 02 June 2025 00:40:53 +0000 (0:00:00.133) 0:00:41.203 *********** 2025-06-02 00:40:53.298869 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:53.299660 | orchestrator | 2025-06-02 00:40:53.300557 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 00:40:53.301247 | orchestrator | Monday 02 June 2025 00:40:53 +0000 (0:00:00.127) 0:00:41.330 *********** 2025-06-02 00:40:53.597490 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:53.598263 | orchestrator | 2025-06-02 00:40:53.599103 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 00:40:53.600231 | orchestrator | Monday 02 June 2025 00:40:53 +0000 (0:00:00.298) 0:00:41.629 *********** 2025-06-02 00:40:53.734191 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:53.736787 | orchestrator | 2025-06-02 00:40:53.737600 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-06-02 00:40:53.738645 | orchestrator | Monday 02 June 2025 00:40:53 +0000 (0:00:00.136) 0:00:41.765 *********** 2025-06-02 00:40:53.859834 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:53.859979 | orchestrator | 2025-06-02 00:40:53.860936 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 00:40:53.861789 | orchestrator | Monday 02 June 2025 00:40:53 +0000 (0:00:00.126) 0:00:41.891 *********** 2025-06-02 00:40:53.981578 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:53.981904 | orchestrator | 2025-06-02 00:40:53.983271 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 00:40:53.984291 | orchestrator | Monday 02 June 2025 00:40:53 +0000 (0:00:00.120) 0:00:42.011 *********** 2025-06-02 00:40:54.120189 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:54.121346 | orchestrator | 2025-06-02 00:40:54.122967 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 00:40:54.123696 | orchestrator | Monday 02 June 2025 00:40:54 +0000 (0:00:00.139) 0:00:42.150 *********** 2025-06-02 00:40:54.250821 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:54.252233 | orchestrator | 2025-06-02 00:40:54.252925 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 00:40:54.254764 | orchestrator | Monday 02 June 2025 00:40:54 +0000 (0:00:00.131) 0:00:42.282 *********** 2025-06-02 00:40:54.375566 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:40:54.376476 | orchestrator | 2025-06-02 00:40:54.376670 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 00:40:54.378123 | orchestrator | Monday 02 June 2025 00:40:54 +0000 (0:00:00.125) 0:00:42.407 *********** 2025-06-02 00:40:54.509006 | orchestrator | skipping: [testbed-node-4] 
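The "Fail if size of … > available" and "Fail if DB LV size < 30 GiB" tasks above validate planned LV sizes against VG capacity before anything is created. A minimal sketch of that validation logic, assuming a hypothetical helper (`check_db_lvs` and its parameters are illustrative, not the OSISM task code):

```python
GIB = 1024 ** 3
# The playbook enforces a 30 GiB floor for DB LVs (see the tasks above).
MIN_DB_LV_SIZE = 30 * GIB


def check_db_lvs(vg_free_bytes: int, num_osds: int, db_lv_size_bytes: int) -> int:
    """Validate planned DB LVs: per-LV size floor and total fit in the VG.

    Returns the total bytes needed if both checks pass.
    """
    if db_lv_size_bytes < MIN_DB_LV_SIZE:
        raise ValueError("DB LV size < 30 GiB")
    needed = num_osds * db_lv_size_bytes
    if needed > vg_free_bytes:
        raise ValueError("size of DB LVs exceeds available VG space")
    return needed


# Example: two 60 GiB DB LVs fit into a VG with 200 GiB free.
total = check_db_lvs(200 * GIB, 2, 60 * GIB)
```

On testbed-node-4 above, both checks are skipped because no `ceph_db_devices`/`ceph_wal_devices` are configured, so there is nothing to validate.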
2025-06-02 00:40:54.509889 | orchestrator |
2025-06-02 00:40:54.511014 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-02 00:40:54.511862 | orchestrator | Monday 02 June 2025 00:40:54 +0000 (0:00:00.132) 0:00:42.540 ***********
2025-06-02 00:40:54.646723 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:54.647838 | orchestrator |
2025-06-02 00:40:54.648585 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-02 00:40:54.649849 | orchestrator | Monday 02 June 2025 00:40:54 +0000 (0:00:00.138) 0:00:42.678 ***********
2025-06-02 00:40:54.802972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:54.804426 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:54.805241 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:54.806343 | orchestrator |
2025-06-02 00:40:54.806986 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-02 00:40:54.807907 | orchestrator | Monday 02 June 2025 00:40:54 +0000 (0:00:00.155) 0:00:42.833 ***********
2025-06-02 00:40:54.942314 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:54.943954 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:54.945057 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:54.946197 | orchestrator |
2025-06-02 00:40:54.947393 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-02 00:40:54.948406 | orchestrator | Monday 02 June 2025 00:40:54 +0000 (0:00:00.138) 0:00:42.972 ***********
2025-06-02 00:40:55.090412 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:55.090791 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:55.091689 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:55.092308 | orchestrator |
2025-06-02 00:40:55.093206 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-02 00:40:55.093702 | orchestrator | Monday 02 June 2025 00:40:55 +0000 (0:00:00.150) 0:00:43.122 ***********
2025-06-02 00:40:55.404633 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:55.404981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:55.406387 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:55.407335 | orchestrator |
2025-06-02 00:40:55.408412 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-02 00:40:55.409154 | orchestrator | Monday 02 June 2025 00:40:55 +0000 (0:00:00.314) 0:00:43.436 ***********
2025-06-02 00:40:55.561409 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:55.562728 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:55.563494 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:55.564542 | orchestrator |
2025-06-02 00:40:55.564904 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-02 00:40:55.566378 | orchestrator | Monday 02 June 2025 00:40:55 +0000 (0:00:00.157) 0:00:43.593 ***********
2025-06-02 00:40:55.699624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:55.699707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:55.701268 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:55.702006 | orchestrator |
2025-06-02 00:40:55.702900 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 00:40:55.703615 | orchestrator | Monday 02 June 2025 00:40:55 +0000 (0:00:00.149) 0:00:43.729 ***********
2025-06-02 00:40:55.847812 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:55.847900 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:55.848999 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:55.849744 | orchestrator |
2025-06-02 00:40:55.850938 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 00:40:55.851110 | orchestrator | Monday 02 June 2025 00:40:55 +0000 (0:00:00.149) 0:00:43.879 ***********
2025-06-02 00:40:55.991943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:55.992630 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:55.993334 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:55.994329 | orchestrator |
2025-06-02 00:40:55.995484 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 00:40:55.996602 | orchestrator | Monday 02 June 2025 00:40:55 +0000 (0:00:00.144) 0:00:44.023 ***********
2025-06-02 00:40:56.489797 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:40:56.490571 | orchestrator |
2025-06-02 00:40:56.491597 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 00:40:56.492461 | orchestrator | Monday 02 June 2025 00:40:56 +0000 (0:00:00.495) 0:00:44.519 ***********
2025-06-02 00:40:57.001253 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:40:57.001397 | orchestrator |
2025-06-02 00:40:57.001529 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 00:40:57.002192 | orchestrator | Monday 02 June 2025 00:40:56 +0000 (0:00:00.511) 0:00:45.031 ***********
2025-06-02 00:40:57.159579 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:40:57.160062 | orchestrator |
2025-06-02 00:40:57.160899 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 00:40:57.161975 | orchestrator | Monday 02 June 2025 00:40:57 +0000 (0:00:00.160) 0:00:45.191 ***********
2025-06-02 00:40:57.331352 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'vg_name': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:57.331859 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'vg_name': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:57.332876 | orchestrator |
2025-06-02 00:40:57.333569 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 00:40:57.334349 | orchestrator | Monday 02 June 2025 00:40:57 +0000 (0:00:00.171) 0:00:45.363 ***********
2025-06-02 00:40:57.489291 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:57.489911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:57.490558 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:57.491311 | orchestrator |
2025-06-02 00:40:57.494146 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 00:40:57.495708 | orchestrator | Monday 02 June 2025 00:40:57 +0000 (0:00:00.157) 0:00:45.520 ***********
2025-06-02 00:40:57.641690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:57.641815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:57.641919 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:57.642835 | orchestrator |
2025-06-02 00:40:57.643467 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 00:40:57.643895 | orchestrator | Monday 02 June 2025 00:40:57 +0000 (0:00:00.150) 0:00:45.671 ***********
2025-06-02 00:40:57.791516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'})
2025-06-02 00:40:57.791715 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'})
2025-06-02 00:40:57.792771 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:40:57.793399 | orchestrator |
2025-06-02 00:40:57.795503 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 00:40:57.795987 | orchestrator | Monday 02 June 2025 00:40:57 +0000 (0:00:00.150) 0:00:45.822 ***********
2025-06-02 00:40:58.289347 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 00:40:58.290366 | orchestrator |  "lvm_report": {
2025-06-02 00:40:58.291615 | orchestrator |  "lv": [
2025-06-02 00:40:58.292471 | orchestrator |  {
2025-06-02 00:40:58.293373 | orchestrator |  "lv_name": "osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf",
2025-06-02 00:40:58.294342 | orchestrator |  "vg_name": "ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf"
2025-06-02 00:40:58.294890 | orchestrator |  },
2025-06-02 00:40:58.295632 | orchestrator |  {
2025-06-02 00:40:58.296609 | orchestrator |  "lv_name": "osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c",
2025-06-02 00:40:58.296932 | orchestrator |  "vg_name": "ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c"
2025-06-02 00:40:58.297691 | orchestrator |  }
2025-06-02 00:40:58.298152 | orchestrator |  ],
2025-06-02 00:40:58.298610 | orchestrator |  "pv": [
2025-06-02 00:40:58.299360 | orchestrator |  {
2025-06-02 00:40:58.299747 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-02 00:40:58.301104 | orchestrator |  "vg_name": "ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c"
2025-06-02 00:40:58.301188 | orchestrator |  },
2025-06-02 00:40:58.301742 | orchestrator |  {
2025-06-02 00:40:58.302115 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-02 00:40:58.302491 | orchestrator |  "vg_name": "ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf"
2025-06-02 00:40:58.302831 | orchestrator |  }
2025-06-02 00:40:58.303822 | orchestrator |  ]
2025-06-02 00:40:58.304776 | orchestrator |  }
2025-06-02 00:40:58.305476 | orchestrator | }
2025-06-02 00:40:58.306691 | orchestrator |
2025-06-02 00:40:58.307064 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 00:40:58.308073 | orchestrator |
2025-06-02 00:40:58.308536 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 00:40:58.309278 | orchestrator | Monday 02 June 2025 00:40:58 +0000 (0:00:00.498) 0:00:46.320 ***********
2025-06-02 00:40:58.532633 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 00:40:58.533363 | orchestrator |
2025-06-02 00:40:58.534319 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 00:40:58.535283 | orchestrator | Monday 02 June 2025 00:40:58 +0000 (0:00:00.242) 0:00:46.563 ***********
2025-06-02 00:40:58.754919 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:40:58.756126 | orchestrator |
2025-06-02 00:40:58.756789 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:58.757829 | orchestrator | Monday 02 June 2025 00:40:58 +0000 (0:00:00.222) 0:00:46.785 ***********
2025-06-02 00:40:59.152497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-02 00:40:59.153394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-02 00:40:59.155293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-02 00:40:59.156262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-02 00:40:59.157910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-02 00:40:59.158626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-02 00:40:59.159803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-02 00:40:59.160169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-02 00:40:59.161545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-02 00:40:59.161705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-02 00:40:59.162601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-02 00:40:59.162877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-02 00:40:59.163549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-02 00:40:59.164246 | orchestrator |
2025-06-02 00:40:59.164886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:59.165143 | orchestrator | Monday 02 June 2025 00:40:59 +0000 (0:00:00.396) 0:00:47.182 ***********
2025-06-02 00:40:59.343857 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:40:59.344028 | orchestrator |
2025-06-02 00:40:59.345323 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:59.346294 | orchestrator | Monday 02 June 2025 00:40:59 +0000 (0:00:00.192) 0:00:47.375 ***********
2025-06-02 00:40:59.544154 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:40:59.546762 | orchestrator |
2025-06-02 00:40:59.546945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:59.548675 | orchestrator | Monday 02 June 2025 00:40:59 +0000 (0:00:00.200) 0:00:47.575 ***********
2025-06-02 00:40:59.739886 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:40:59.740107 | orchestrator |
2025-06-02 00:40:59.740930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:59.741410 | orchestrator | Monday 02 June 2025 00:40:59 +0000 (0:00:00.195) 0:00:47.771 ***********
2025-06-02 00:40:59.933174 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:40:59.933784 | orchestrator |
2025-06-02 00:40:59.934631 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:40:59.935269 | orchestrator | Monday 02 June 2025 00:40:59 +0000 (0:00:00.193) 0:00:47.964 ***********
2025-06-02 00:41:00.114713 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:00.114910 | orchestrator |
2025-06-02 00:41:00.115078 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:41:00.115871 | orchestrator | Monday 02 June 2025 00:41:00 +0000 (0:00:00.181) 0:00:48.146 ***********
2025-06-02 00:41:00.675056 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:00.675758 | orchestrator |
2025-06-02 00:41:00.676729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:41:00.677705 | orchestrator | Monday 02 June 2025 00:41:00 +0000 (0:00:00.559) 0:00:48.706 ***********
2025-06-02 00:41:00.876392 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:00.877003 | orchestrator |
2025-06-02 00:41:00.877843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:41:00.878501 | orchestrator | Monday 02 June 2025 00:41:00 +0000 (0:00:00.185) 0:00:48.891 ***********
2025-06-02 00:41:01.050937 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:01.051303 | orchestrator |
2025-06-02 00:41:01.052042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:41:01.052361 | orchestrator | Monday 02 June 2025 00:41:01 +0000 (0:00:00.190) 0:00:49.082 ***********
2025-06-02 00:41:01.443048 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119)
2025-06-02 00:41:01.443145 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119)
2025-06-02 00:41:01.443291 | orchestrator |
2025-06-02 00:41:01.443826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:41:01.444688 | orchestrator | Monday 02 June 2025 00:41:01 +0000 (0:00:00.391) 0:00:49.473 ***********
2025-06-02 00:41:01.866864 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e)
2025-06-02 00:41:01.867031 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e)
2025-06-02 00:41:01.867841 | orchestrator |
2025-06-02 00:41:01.868555 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:41:01.869845 | orchestrator | Monday 02 June 2025 00:41:01 +0000 (0:00:00.423) 0:00:49.897 ***********
2025-06-02 00:41:02.267566 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172)
2025-06-02 00:41:02.268878 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172)
2025-06-02 00:41:02.268919 | orchestrator |
2025-06-02 00:41:02.268994 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:41:02.269425 | orchestrator | Monday 02 June 2025 00:41:02 +0000 (0:00:00.399) 0:00:50.296 ***********
2025-06-02 00:41:02.680949 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb)
2025-06-02 00:41:02.681330 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb)
2025-06-02 00:41:02.682197 | orchestrator |
2025-06-02 00:41:02.683876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 00:41:02.684318 | orchestrator | Monday 02 June 2025 00:41:02 +0000 (0:00:00.414) 0:00:50.711 ***********
2025-06-02 00:41:03.049326 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 00:41:03.050118 | orchestrator |
2025-06-02 00:41:03.052188 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:03.052277 | orchestrator | Monday 02 June 2025 00:41:03 +0000 (0:00:00.366) 0:00:51.077 ***********
2025-06-02 00:41:03.451612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-02 00:41:03.452487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-02 00:41:03.453509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-02 00:41:03.454435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-02 00:41:03.455606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-02 00:41:03.456381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-02 00:41:03.457290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-02 00:41:03.457725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-02 00:41:03.458385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-02 00:41:03.459277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-02 00:41:03.459673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-02 00:41:03.460222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-02 00:41:03.460602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-02 00:41:03.461224 | orchestrator |
2025-06-02 00:41:03.461711 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:03.462127 | orchestrator | Monday 02 June 2025 00:41:03 +0000 (0:00:00.405) 0:00:51.483 ***********
2025-06-02 00:41:03.632038 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:03.632580 | orchestrator |
2025-06-02 00:41:03.633734 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:03.634845 | orchestrator | Monday 02 June 2025 00:41:03 +0000 (0:00:00.179) 0:00:51.663 ***********
2025-06-02 00:41:03.834549 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:03.835111 | orchestrator |
2025-06-02 00:41:03.838234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:03.838651 | orchestrator | Monday 02 June 2025 00:41:03 +0000 (0:00:00.200) 0:00:51.863 ***********
2025-06-02 00:41:04.512747 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:04.512909 | orchestrator |
2025-06-02 00:41:04.513739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:04.514648 | orchestrator | Monday 02 June 2025 00:41:04 +0000 (0:00:00.680) 0:00:52.543 ***********
2025-06-02 00:41:04.736389 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:04.736889 | orchestrator |
2025-06-02 00:41:04.737686 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:04.738580 | orchestrator | Monday 02 June 2025 00:41:04 +0000 (0:00:00.223) 0:00:52.767 ***********
2025-06-02 00:41:04.937985 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:04.938376 | orchestrator |
2025-06-02 00:41:04.938392 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:04.939233 | orchestrator | Monday 02 June 2025 00:41:04 +0000 (0:00:00.201) 0:00:52.969 ***********
2025-06-02 00:41:05.153003 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:05.153218 | orchestrator |
2025-06-02 00:41:05.154093 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:05.156012 | orchestrator | Monday 02 June 2025 00:41:05 +0000 (0:00:00.214) 0:00:53.183 ***********
2025-06-02 00:41:05.366102 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:05.366705 | orchestrator |
2025-06-02 00:41:05.367895 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:05.368630 | orchestrator | Monday 02 June 2025 00:41:05 +0000 (0:00:00.214) 0:00:53.397 ***********
2025-06-02 00:41:05.573818 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:05.574685 | orchestrator |
2025-06-02 00:41:05.576209 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:05.577075 | orchestrator | Monday 02 June 2025 00:41:05 +0000 (0:00:00.207) 0:00:53.604 ***********
2025-06-02 00:41:06.205324 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-02 00:41:06.205829 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-02 00:41:06.206831 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-02 00:41:06.207354 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-02 00:41:06.209182 | orchestrator |
2025-06-02 00:41:06.209210 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:06.209549 | orchestrator | Monday 02 June 2025 00:41:06 +0000 (0:00:00.630) 0:00:54.235 ***********
2025-06-02 00:41:06.415333 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:06.415557 | orchestrator |
2025-06-02 00:41:06.416193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:06.416718 | orchestrator | Monday 02 June 2025 00:41:06 +0000 (0:00:00.211) 0:00:54.446 ***********
2025-06-02 00:41:06.630361 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:06.630582 | orchestrator |
2025-06-02 00:41:06.630862 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:06.631599 | orchestrator | Monday 02 June 2025 00:41:06 +0000 (0:00:00.214) 0:00:54.661 ***********
2025-06-02 00:41:06.827513 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:06.828668 | orchestrator |
2025-06-02 00:41:06.829367 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 00:41:06.830458 | orchestrator | Monday 02 June 2025 00:41:06 +0000 (0:00:00.197) 0:00:54.858 ***********
2025-06-02 00:41:07.024824 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:07.025246 | orchestrator |
2025-06-02 00:41:07.026320 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-02 00:41:07.027036 | orchestrator | Monday 02 June 2025 00:41:07 +0000 (0:00:00.196) 0:00:55.055 ***********
2025-06-02 00:41:07.355091 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:07.355826 | orchestrator |
2025-06-02 00:41:07.357316 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-02 00:41:07.358890 | orchestrator | Monday 02 June 2025 00:41:07 +0000 (0:00:00.330) 0:00:55.386 ***********
2025-06-02 00:41:07.540816 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'}})
2025-06-02 00:41:07.540903 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '17a6e190-aa70-5b53-9f6a-9d016360bd22'}})
2025-06-02 00:41:07.540917 | orchestrator |
2025-06-02 00:41:07.540930 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-02 00:41:07.541265 | orchestrator | Monday 02 June 2025 00:41:07 +0000 (0:00:00.184) 0:00:55.570 ***********
2025-06-02 00:41:09.368048 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})
2025-06-02 00:41:09.368719 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})
2025-06-02 00:41:09.370180 | orchestrator |
2025-06-02 00:41:09.370790 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-02 00:41:09.371705 | orchestrator | Monday 02 June 2025 00:41:09 +0000 (0:00:01.827) 0:00:57.398 ***********
2025-06-02 00:41:09.525760 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})
2025-06-02 00:41:09.525858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})
2025-06-02 00:41:09.527419 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:09.528188 | orchestrator |
2025-06-02 00:41:09.528707 | orchestrator | TASK [Create
block LVs] ******************************************************** 2025-06-02 00:41:09.529594 | orchestrator | Monday 02 June 2025 00:41:09 +0000 (0:00:00.155) 0:00:57.553 *********** 2025-06-02 00:41:10.807701 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'}) 2025-06-02 00:41:10.809388 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'}) 2025-06-02 00:41:10.809427 | orchestrator | 2025-06-02 00:41:10.809600 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 00:41:10.810404 | orchestrator | Monday 02 June 2025 00:41:10 +0000 (0:00:01.283) 0:00:58.836 *********** 2025-06-02 00:41:10.952961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:10.953968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:10.955654 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:10.955746 | orchestrator | 2025-06-02 00:41:10.955808 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 00:41:10.956429 | orchestrator | Monday 02 June 2025 00:41:10 +0000 (0:00:00.146) 0:00:58.983 *********** 2025-06-02 00:41:11.098662 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:11.098755 | orchestrator | 2025-06-02 00:41:11.098971 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 00:41:11.099246 | orchestrator | Monday 02 June 2025 00:41:11 +0000 (0:00:00.147) 0:00:59.130 *********** 2025-06-02 00:41:11.249852 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:11.250622 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:11.250654 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:11.251672 | orchestrator | 2025-06-02 00:41:11.252398 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 00:41:11.252874 | orchestrator | Monday 02 June 2025 00:41:11 +0000 (0:00:00.151) 0:00:59.281 *********** 2025-06-02 00:41:11.385827 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:11.385919 | orchestrator | 2025-06-02 00:41:11.387669 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 00:41:11.387707 | orchestrator | Monday 02 June 2025 00:41:11 +0000 (0:00:00.133) 0:00:59.415 *********** 2025-06-02 00:41:11.522942 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:11.524322 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:11.525506 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:11.526192 | orchestrator | 2025-06-02 00:41:11.526926 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 00:41:11.527836 | orchestrator | Monday 02 June 2025 00:41:11 +0000 (0:00:00.138) 0:00:59.553 *********** 2025-06-02 00:41:11.644900 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:11.644991 | orchestrator | 2025-06-02 00:41:11.645591 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 00:41:11.646260 | orchestrator | Monday 02 June 2025 00:41:11 +0000 (0:00:00.121) 0:00:59.675 *********** 2025-06-02 00:41:11.807038 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:11.808273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:11.809116 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:11.809997 | orchestrator | 2025-06-02 00:41:11.811082 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 00:41:11.811730 | orchestrator | Monday 02 June 2025 00:41:11 +0000 (0:00:00.163) 0:00:59.838 *********** 2025-06-02 00:41:11.957859 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:11.959443 | orchestrator | 2025-06-02 00:41:11.960338 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 00:41:11.961349 | orchestrator | Monday 02 June 2025 00:41:11 +0000 (0:00:00.150) 0:00:59.989 *********** 2025-06-02 00:41:12.290119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:12.290220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:12.292660 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:12.293412 | orchestrator | 2025-06-02 00:41:12.293945 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 00:41:12.294614 | orchestrator | Monday 02 June 2025 
00:41:12 +0000 (0:00:00.330) 0:01:00.320 *********** 2025-06-02 00:41:12.451411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:12.452591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:12.454222 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:12.456243 | orchestrator | 2025-06-02 00:41:12.457174 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 00:41:12.457624 | orchestrator | Monday 02 June 2025 00:41:12 +0000 (0:00:00.163) 0:01:00.483 *********** 2025-06-02 00:41:12.613798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:12.613897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:12.614366 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:12.614661 | orchestrator | 2025-06-02 00:41:12.615285 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 00:41:12.615746 | orchestrator | Monday 02 June 2025 00:41:12 +0000 (0:00:00.161) 0:01:00.645 *********** 2025-06-02 00:41:12.759593 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:12.759771 | orchestrator | 2025-06-02 00:41:12.760409 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 00:41:12.760973 | orchestrator | Monday 02 June 2025 00:41:12 +0000 (0:00:00.146) 0:01:00.791 *********** 2025-06-02 00:41:12.920102 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
00:41:12.920617 | orchestrator | 2025-06-02 00:41:12.921431 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 00:41:12.921839 | orchestrator | Monday 02 June 2025 00:41:12 +0000 (0:00:00.160) 0:01:00.952 *********** 2025-06-02 00:41:13.059698 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:13.060042 | orchestrator | 2025-06-02 00:41:13.060529 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 00:41:13.061062 | orchestrator | Monday 02 June 2025 00:41:13 +0000 (0:00:00.139) 0:01:01.091 *********** 2025-06-02 00:41:13.204043 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 00:41:13.205905 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 00:41:13.206615 | orchestrator | } 2025-06-02 00:41:13.207721 | orchestrator | 2025-06-02 00:41:13.208525 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 00:41:13.209386 | orchestrator | Monday 02 June 2025 00:41:13 +0000 (0:00:00.142) 0:01:01.234 *********** 2025-06-02 00:41:13.346932 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 00:41:13.347719 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 00:41:13.348712 | orchestrator | } 2025-06-02 00:41:13.348918 | orchestrator | 2025-06-02 00:41:13.350756 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 00:41:13.351705 | orchestrator | Monday 02 June 2025 00:41:13 +0000 (0:00:00.143) 0:01:01.377 *********** 2025-06-02 00:41:13.478893 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 00:41:13.479135 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 00:41:13.481100 | orchestrator | } 2025-06-02 00:41:13.482406 | orchestrator | 2025-06-02 00:41:13.483513 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 00:41:13.483828 | 
orchestrator | Monday 02 June 2025 00:41:13 +0000 (0:00:00.132) 0:01:01.510 *********** 2025-06-02 00:41:13.972915 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:13.973766 | orchestrator | 2025-06-02 00:41:13.973889 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 00:41:13.975110 | orchestrator | Monday 02 June 2025 00:41:13 +0000 (0:00:00.492) 0:01:02.002 *********** 2025-06-02 00:41:14.511134 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:14.511261 | orchestrator | 2025-06-02 00:41:14.512973 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 00:41:14.514852 | orchestrator | Monday 02 June 2025 00:41:14 +0000 (0:00:00.538) 0:01:02.541 *********** 2025-06-02 00:41:15.036774 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:15.037938 | orchestrator | 2025-06-02 00:41:15.039526 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 00:41:15.039785 | orchestrator | Monday 02 June 2025 00:41:15 +0000 (0:00:00.523) 0:01:03.065 *********** 2025-06-02 00:41:15.469414 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:15.469629 | orchestrator | 2025-06-02 00:41:15.470841 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 00:41:15.471593 | orchestrator | Monday 02 June 2025 00:41:15 +0000 (0:00:00.435) 0:01:03.500 *********** 2025-06-02 00:41:15.592912 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:15.593101 | orchestrator | 2025-06-02 00:41:15.594622 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 00:41:15.595191 | orchestrator | Monday 02 June 2025 00:41:15 +0000 (0:00:00.122) 0:01:03.623 *********** 2025-06-02 00:41:15.706344 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:15.706743 | orchestrator | 2025-06-02 00:41:15.707774 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 00:41:15.708757 | orchestrator | Monday 02 June 2025 00:41:15 +0000 (0:00:00.114) 0:01:03.737 *********** 2025-06-02 00:41:15.866166 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 00:41:15.866376 | orchestrator |  "vgs_report": { 2025-06-02 00:41:15.868602 | orchestrator |  "vg": [] 2025-06-02 00:41:15.868886 | orchestrator |  } 2025-06-02 00:41:15.870056 | orchestrator | } 2025-06-02 00:41:15.870856 | orchestrator | 2025-06-02 00:41:15.872016 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 00:41:15.872543 | orchestrator | Monday 02 June 2025 00:41:15 +0000 (0:00:00.159) 0:01:03.897 *********** 2025-06-02 00:41:15.991714 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:15.991895 | orchestrator | 2025-06-02 00:41:15.992353 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 00:41:15.993359 | orchestrator | Monday 02 June 2025 00:41:15 +0000 (0:00:00.124) 0:01:04.022 *********** 2025-06-02 00:41:16.134824 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:16.135013 | orchestrator | 2025-06-02 00:41:16.136301 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 00:41:16.136680 | orchestrator | Monday 02 June 2025 00:41:16 +0000 (0:00:00.144) 0:01:04.166 *********** 2025-06-02 00:41:16.261216 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:16.261422 | orchestrator | 2025-06-02 00:41:16.262868 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 00:41:16.263499 | orchestrator | Monday 02 June 2025 00:41:16 +0000 (0:00:00.122) 0:01:04.288 *********** 2025-06-02 00:41:16.396954 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:16.397053 | orchestrator | 2025-06-02 00:41:16.397564 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 00:41:16.399273 | orchestrator | Monday 02 June 2025 00:41:16 +0000 (0:00:00.139) 0:01:04.427 *********** 2025-06-02 00:41:16.541917 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:16.542012 | orchestrator | 2025-06-02 00:41:16.543019 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 00:41:16.545752 | orchestrator | Monday 02 June 2025 00:41:16 +0000 (0:00:00.144) 0:01:04.572 *********** 2025-06-02 00:41:16.686173 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:16.686265 | orchestrator | 2025-06-02 00:41:16.686280 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 00:41:16.687032 | orchestrator | Monday 02 June 2025 00:41:16 +0000 (0:00:00.143) 0:01:04.715 *********** 2025-06-02 00:41:16.831444 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:16.832352 | orchestrator | 2025-06-02 00:41:16.834116 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 00:41:16.835392 | orchestrator | Monday 02 June 2025 00:41:16 +0000 (0:00:00.146) 0:01:04.862 *********** 2025-06-02 00:41:16.971574 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:16.973298 | orchestrator | 2025-06-02 00:41:16.974626 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 00:41:16.975934 | orchestrator | Monday 02 June 2025 00:41:16 +0000 (0:00:00.139) 0:01:05.001 *********** 2025-06-02 00:41:17.288424 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:17.289166 | orchestrator | 2025-06-02 00:41:17.290667 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 00:41:17.292705 | orchestrator | Monday 02 June 2025 00:41:17 +0000 (0:00:00.317) 0:01:05.319 *********** 
2025-06-02 00:41:17.423582 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:17.425433 | orchestrator | 2025-06-02 00:41:17.425497 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 00:41:17.425512 | orchestrator | Monday 02 June 2025 00:41:17 +0000 (0:00:00.136) 0:01:05.455 *********** 2025-06-02 00:41:17.542202 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:17.542665 | orchestrator | 2025-06-02 00:41:17.546858 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 00:41:17.546970 | orchestrator | Monday 02 June 2025 00:41:17 +0000 (0:00:00.117) 0:01:05.572 *********** 2025-06-02 00:41:17.667403 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:17.667609 | orchestrator | 2025-06-02 00:41:17.670160 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 00:41:17.671029 | orchestrator | Monday 02 June 2025 00:41:17 +0000 (0:00:00.123) 0:01:05.696 *********** 2025-06-02 00:41:17.814697 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:17.815330 | orchestrator | 2025-06-02 00:41:17.816972 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 00:41:17.817319 | orchestrator | Monday 02 June 2025 00:41:17 +0000 (0:00:00.149) 0:01:05.845 *********** 2025-06-02 00:41:17.964602 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:17.965641 | orchestrator | 2025-06-02 00:41:17.967167 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 00:41:17.968035 | orchestrator | Monday 02 June 2025 00:41:17 +0000 (0:00:00.150) 0:01:05.996 *********** 2025-06-02 00:41:18.128420 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 
00:41:18.129613 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:18.131000 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:18.132237 | orchestrator | 2025-06-02 00:41:18.133127 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 00:41:18.134261 | orchestrator | Monday 02 June 2025 00:41:18 +0000 (0:00:00.163) 0:01:06.159 *********** 2025-06-02 00:41:18.292349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:18.293101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:18.294652 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:18.295424 | orchestrator | 2025-06-02 00:41:18.296440 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 00:41:18.297582 | orchestrator | Monday 02 June 2025 00:41:18 +0000 (0:00:00.164) 0:01:06.323 *********** 2025-06-02 00:41:18.447950 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:18.449410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:18.449668 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:18.450871 | orchestrator | 2025-06-02 00:41:18.451727 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 00:41:18.452956 | orchestrator | Monday 02 June 2025 
00:41:18 +0000 (0:00:00.156) 0:01:06.479 *********** 2025-06-02 00:41:18.604000 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:18.604687 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:18.607862 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:18.608905 | orchestrator | 2025-06-02 00:41:18.610258 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 00:41:18.611039 | orchestrator | Monday 02 June 2025 00:41:18 +0000 (0:00:00.155) 0:01:06.634 *********** 2025-06-02 00:41:18.764362 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:18.765741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:18.767087 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:18.768811 | orchestrator | 2025-06-02 00:41:18.770299 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 00:41:18.770814 | orchestrator | Monday 02 June 2025 00:41:18 +0000 (0:00:00.160) 0:01:06.795 *********** 2025-06-02 00:41:18.907607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:18.908679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:18.911862 | orchestrator | 
skipping: [testbed-node-5] 2025-06-02 00:41:18.912967 | orchestrator | 2025-06-02 00:41:18.914088 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 00:41:18.914816 | orchestrator | Monday 02 June 2025 00:41:18 +0000 (0:00:00.142) 0:01:06.938 *********** 2025-06-02 00:41:19.248559 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:19.249015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:19.250418 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:19.251227 | orchestrator | 2025-06-02 00:41:19.252673 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 00:41:19.253431 | orchestrator | Monday 02 June 2025 00:41:19 +0000 (0:00:00.340) 0:01:07.278 *********** 2025-06-02 00:41:19.396309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:19.397381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:19.398600 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:19.399808 | orchestrator | 2025-06-02 00:41:19.400436 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 00:41:19.401519 | orchestrator | Monday 02 June 2025 00:41:19 +0000 (0:00:00.148) 0:01:07.426 *********** 2025-06-02 00:41:19.893936 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:19.895633 | orchestrator | 2025-06-02 00:41:19.896665 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-06-02 00:41:19.897981 | orchestrator | Monday 02 June 2025 00:41:19 +0000 (0:00:00.496) 0:01:07.923 *********** 2025-06-02 00:41:20.396696 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:20.400024 | orchestrator | 2025-06-02 00:41:20.400228 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-02 00:41:20.401376 | orchestrator | Monday 02 June 2025 00:41:20 +0000 (0:00:00.503) 0:01:08.427 *********** 2025-06-02 00:41:20.544139 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:20.544234 | orchestrator | 2025-06-02 00:41:20.544389 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 00:41:20.544703 | orchestrator | Monday 02 June 2025 00:41:20 +0000 (0:00:00.148) 0:01:08.575 *********** 2025-06-02 00:41:20.708380 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'vg_name': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'}) 2025-06-02 00:41:20.708697 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'vg_name': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'}) 2025-06-02 00:41:20.710688 | orchestrator | 2025-06-02 00:41:20.710734 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 00:41:20.711645 | orchestrator | Monday 02 June 2025 00:41:20 +0000 (0:00:00.162) 0:01:08.737 *********** 2025-06-02 00:41:20.865908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})  2025-06-02 00:41:20.866280 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})  2025-06-02 00:41:20.866980 | orchestrator | skipping: 
[testbed-node-5]
2025-06-02 00:41:20.867808 | orchestrator |
2025-06-02 00:41:20.868752 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 00:41:20.869740 | orchestrator | Monday 02 June 2025 00:41:20 +0000 (0:00:00.159) 0:01:08.897 ***********
2025-06-02 00:41:21.023855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})
2025-06-02 00:41:21.024080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})
2025-06-02 00:41:21.025097 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:21.026177 | orchestrator |
2025-06-02 00:41:21.026997 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 00:41:21.028890 | orchestrator | Monday 02 June 2025 00:41:21 +0000 (0:00:00.157) 0:01:09.055 ***********
2025-06-02 00:41:21.171443 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'})
2025-06-02 00:41:21.172737 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'})
2025-06-02 00:41:21.173363 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:41:21.175633 | orchestrator |
2025-06-02 00:41:21.175729 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 00:41:21.175835 | orchestrator | Monday 02 June 2025 00:41:21 +0000 (0:00:00.147) 0:01:09.203 ***********
2025-06-02 00:41:21.311337 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 00:41:21.312218 | orchestrator |     "lvm_report": {
2025-06-02 00:41:21.313629 | orchestrator |         "lv": [
2025-06-02 00:41:21.314598 | orchestrator |             {
2025-06-02 00:41:21.315427 | orchestrator |                 "lv_name": "osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22",
2025-06-02 00:41:21.316873 | orchestrator |                 "vg_name": "ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22"
2025-06-02 00:41:21.318224 | orchestrator |             },
2025-06-02 00:41:21.318522 | orchestrator |             {
2025-06-02 00:41:21.319561 | orchestrator |                 "lv_name": "osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644",
2025-06-02 00:41:21.320303 | orchestrator |                 "vg_name": "ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644"
2025-06-02 00:41:21.321143 | orchestrator |             }
2025-06-02 00:41:21.321809 | orchestrator |         ],
2025-06-02 00:41:21.322599 | orchestrator |         "pv": [
2025-06-02 00:41:21.323601 | orchestrator |             {
2025-06-02 00:41:21.324572 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-02 00:41:21.325498 | orchestrator |                 "vg_name": "ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644"
2025-06-02 00:41:21.326429 | orchestrator |             },
2025-06-02 00:41:21.327378 | orchestrator |             {
2025-06-02 00:41:21.328025 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-02 00:41:21.328887 | orchestrator |                 "vg_name": "ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22"
2025-06-02 00:41:21.329987 | orchestrator |             }
2025-06-02 00:41:21.330839 | orchestrator |         ]
2025-06-02 00:41:21.331532 | orchestrator |     }
2025-06-02 00:41:21.332253 | orchestrator | }
2025-06-02 00:41:21.333047 | orchestrator |
2025-06-02 00:41:21.333983 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:41:21.334610 | orchestrator | 2025-06-02 00:41:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 00:41:21.334867 | orchestrator | 2025-06-02 00:41:21 | INFO  | Please wait and do not abort execution.
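The `lvm_report` printed by the "Print LVM report data" task pairs each OSD block LV with its VG (`lv` list) and each VG with its backing physical device (`pv` list). As a minimal sketch, the two lists can be joined on `vg_name` to see which disk backs each OSD LV; the data below is copied from the report above, but the joining code is an illustration of mine, not part of the playbook:

```python
# Join the lv/pv lists of the printed lvm_report on vg_name to map each
# OSD block LV to its backing physical device (values taken from the log).
lvm_report = {
    "lv": [
        {"lv_name": "osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22",
         "vg_name": "ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22"},
        {"lv_name": "osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644",
         "vg_name": "ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644"},
    ],
    "pv": [
        {"pv_name": "/dev/sdb", "vg_name": "ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644"},
        {"pv_name": "/dev/sdc", "vg_name": "ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22"},
    ],
}

# vg_name is the shared key between the two lists.
vg_to_pv = {p["vg_name"]: p["pv_name"] for p in lvm_report["pv"]}
lv_to_device = {l["lv_name"]: vg_to_pv[l["vg_name"]] for l in lvm_report["lv"]}

for lv, dev in sorted(lv_to_device.items()):
    print(lv, "->", dev)
```

This confirms the pairing visible in the log: the `93d4fc0b…` OSD sits on `/dev/sdb` and the `17a6e190…` OSD on `/dev/sdc`.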
2025-06-02 00:41:21.335774 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 00:41:21.337046 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 00:41:21.337993 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 00:41:21.338968 | orchestrator |
2025-06-02 00:41:21.340074 | orchestrator |
2025-06-02 00:41:21.340751 | orchestrator |
2025-06-02 00:41:21.341521 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:41:21.342838 | orchestrator | Monday 02 June 2025 00:41:21 +0000 (0:00:00.138) 0:01:09.342 ***********
2025-06-02 00:41:21.343174 | orchestrator | ===============================================================================
2025-06-02 00:41:21.344000 | orchestrator | Create block VGs -------------------------------------------------------- 5.54s
2025-06-02 00:41:21.344864 | orchestrator | Create block LVs -------------------------------------------------------- 3.94s
2025-06-02 00:41:21.345638 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.85s
2025-06-02 00:41:21.347353 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s
2025-06-02 00:41:21.347990 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.52s
2025-06-02 00:41:21.348969 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s
2025-06-02 00:41:21.349970 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.52s
2025-06-02 00:41:21.351107 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s
2025-06-02 00:41:21.351849 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s
2025-06-02 00:41:21.352816 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2025-06-02 00:41:21.353328 | orchestrator | Print LVM report data --------------------------------------------------- 0.92s
2025-06-02 00:41:21.354351 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s
2025-06-02 00:41:21.354962 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.71s
2025-06-02 00:41:21.357284 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.68s
2025-06-02 00:41:21.358306 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-06-02 00:41:21.358850 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.66s
2025-06-02 00:41:21.359523 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.65s
2025-06-02 00:41:21.360054 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.65s
2025-06-02 00:41:21.360544 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.64s
2025-06-02 00:41:21.361325 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s
2025-06-02 00:41:23.630406 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:41:23.630563 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:41:23.630581 | orchestrator | Registering Redlock._release_script
2025-06-02 00:41:23.692625 | orchestrator | 2025-06-02 00:41:23 | INFO  | Task facb0bda-b6e8-402c-9508-315c1658b2a1 (facts) was prepared for execution.
2025-06-02 00:41:23.692715 | orchestrator | 2025-06-02 00:41:23 | INFO  | It takes a moment until task facb0bda-b6e8-402c-9508-315c1658b2a1 (facts) has been started and output is visible here.
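The TASKS RECAP durations follow the familiar `profile_tasks`-style format, one `task name ---- 5.54s` line per task. A small sketch that turns such lines into `(task, seconds)` pairs, e.g. for diffing timings between nightly runs; the parsing rule is inferred from the output shown here, not taken from the callback's source:

```python
import re

# Matches "Some task name -------- 5.54s"; the dash run separates name and time.
PROFILE_LINE = re.compile(r"^(?P<task>.+?) -{2,} (?P<secs>\d+\.\d+)s$")

def parse_profile(lines):
    """Parse TASKS RECAP duration lines into (task, seconds) pairs,
    skipping headers and separator lines that do not match."""
    pairs = []
    for line in lines:
        m = PROFILE_LINE.match(line.strip())
        if m:
            pairs.append((m.group("task"), float(m.group("secs"))))
    return pairs
```

The callback already prints tasks slowest-first, so the returned list keeps that order.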
2025-06-02 00:41:27.473886 | orchestrator | 2025-06-02 00:41:27.474368 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-02 00:41:27.475326 | orchestrator | 2025-06-02 00:41:27.476026 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 00:41:27.477024 | orchestrator | Monday 02 June 2025 00:41:27 +0000 (0:00:00.217) 0:00:00.217 *********** 2025-06-02 00:41:28.445873 | orchestrator | ok: [testbed-manager] 2025-06-02 00:41:28.446875 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:41:28.449006 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:41:28.451091 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:41:28.452169 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:41:28.452610 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:41:28.453540 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:28.454589 | orchestrator | 2025-06-02 00:41:28.455242 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 00:41:28.456003 | orchestrator | Monday 02 June 2025 00:41:28 +0000 (0:00:00.965) 0:00:01.182 *********** 2025-06-02 00:41:28.582802 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:41:28.653306 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:41:28.723073 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:41:28.792392 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:41:28.856418 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:41:29.607955 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:41:29.608676 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:29.613612 | orchestrator | 2025-06-02 00:41:29.613651 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 00:41:29.613664 | orchestrator | 2025-06-02 00:41:29.614685 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-02 00:41:29.614917 | orchestrator | Monday 02 June 2025 00:41:29 +0000 (0:00:01.167) 0:00:02.350 *********** 2025-06-02 00:41:34.357313 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:41:34.358196 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:41:34.359044 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:41:34.360845 | orchestrator | ok: [testbed-manager] 2025-06-02 00:41:34.361981 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:41:34.362835 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:41:34.364230 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:41:34.364590 | orchestrator | 2025-06-02 00:41:34.365344 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 00:41:34.366008 | orchestrator | 2025-06-02 00:41:34.366770 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 00:41:34.367294 | orchestrator | Monday 02 June 2025 00:41:34 +0000 (0:00:04.750) 0:00:07.100 *********** 2025-06-02 00:41:34.524530 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:41:34.608323 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:41:34.724432 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:41:34.806249 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:41:34.889198 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:41:34.938423 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:41:34.939055 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:41:34.939763 | orchestrator | 2025-06-02 00:41:34.940679 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:41:34.941497 | orchestrator | 2025-06-02 00:41:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 00:41:34.941524 | orchestrator | 2025-06-02 00:41:34 | INFO  | Please wait and do not abort execution. 2025-06-02 00:41:34.942014 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:41:34.942606 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:41:34.943403 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:41:34.943744 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:41:34.944484 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:41:34.944768 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:41:34.945414 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:41:34.945691 | orchestrator | 2025-06-02 00:41:34.946756 | orchestrator | 2025-06-02 00:41:34.948158 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:41:34.948727 | orchestrator | Monday 02 June 2025 00:41:34 +0000 (0:00:00.581) 0:00:07.682 *********** 2025-06-02 00:41:34.949082 | orchestrator | =============================================================================== 2025-06-02 00:41:34.949871 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.75s 2025-06-02 00:41:34.950590 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2025-06-02 00:41:34.950895 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.97s 2025-06-02 00:41:34.951553 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2025-06-02 
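PLAY RECAP lines like `testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0` are easy to machine-check when scanning a job log for regressions. A sketch of doing so; the counter names are copied from the recap output, while `run_is_clean` is an illustrative helper, not part of the OSISM tooling:

```python
import re

# Host name, colon, then a run of counter=value pairs.
RECAP_LINE = re.compile(r"^(?P<host>[\w.-]+)\s*:\s*(?P<counts>(?:\w+=\d+\s*)+)$")

def parse_recap(lines):
    """Turn PLAY RECAP host lines into {host: {counter: int}}."""
    hosts = {}
    for line in lines:
        m = RECAP_LINE.match(line.strip())
        if m:
            hosts[m.group("host")] = {
                key: int(val)
                for key, val in re.findall(r"(\w+)=(\d+)", m.group("counts"))
            }
    return hosts

def run_is_clean(hosts):
    """True when no host reported failed or unreachable tasks."""
    return all(c["failed"] == 0 and c["unreachable"] == 0 for c in hosts.values())
```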
00:41:35.636297 | orchestrator |
2025-06-02 00:41:35.639600 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jun 2 00:41:35 UTC 2025
2025-06-02 00:41:35.639642 | orchestrator |
2025-06-02 00:41:37.271820 | orchestrator | 2025-06-02 00:41:37 | INFO  | Collection nutshell is prepared for execution
2025-06-02 00:41:37.271919 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [0] - dotfiles
2025-06-02 00:41:37.276440 | orchestrator | Registering Redlock._acquired_script
2025-06-02 00:41:37.276538 | orchestrator | Registering Redlock._extend_script
2025-06-02 00:41:37.276552 | orchestrator | Registering Redlock._release_script
2025-06-02 00:41:37.280875 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [0] - homer
2025-06-02 00:41:37.280924 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [0] - netdata
2025-06-02 00:41:37.280988 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [0] - openstackclient
2025-06-02 00:41:37.281003 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [0] - phpmyadmin
2025-06-02 00:41:37.281014 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [0] - common
2025-06-02 00:41:37.282785 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [1] -- loadbalancer
2025-06-02 00:41:37.282880 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [2] --- opensearch
2025-06-02 00:41:37.282961 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [2] --- mariadb-ng
2025-06-02 00:41:37.282978 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [3] ---- horizon
2025-06-02 00:41:37.282989 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [3] ---- keystone
2025-06-02 00:41:37.283179 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [4] ----- neutron
2025-06-02 00:41:37.283199 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [5] ------ wait-for-nova
2025-06-02 00:41:37.283259 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [5] ------ octavia
2025-06-02 00:41:37.283599 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [4] ----- barbican
2025-06-02 00:41:37.283623 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [4] ----- designate
2025-06-02 00:41:37.283635 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [4] ----- ironic
2025-06-02 00:41:37.283832 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [4] ----- placement
2025-06-02 00:41:37.283852 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [4] ----- magnum
2025-06-02 00:41:37.284132 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [1] -- openvswitch
2025-06-02 00:41:37.284224 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [2] --- ovn
2025-06-02 00:41:37.284480 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [1] -- memcached
2025-06-02 00:41:37.285003 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [1] -- redis
2025-06-02 00:41:37.285023 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [1] -- rabbitmq-ng
2025-06-02 00:41:37.285034 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [0] - kubernetes
2025-06-02 00:41:37.286476 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [1] -- kubeconfig
2025-06-02 00:41:37.286500 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [1] -- copy-kubeconfig
2025-06-02 00:41:37.286593 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [0] - ceph
2025-06-02 00:41:37.288165 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [1] -- ceph-pools
2025-06-02 00:41:37.288383 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [2] --- copy-ceph-keys
2025-06-02 00:41:37.288401 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [3] ---- cephclient
2025-06-02 00:41:37.288412 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-02 00:41:37.288423 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [4] ----- wait-for-keystone
2025-06-02 00:41:37.288669 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-02 00:41:37.288691 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [5] ------ glance
2025-06-02 00:41:37.288702 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [5] ------ cinder
2025-06-02 00:41:37.288713 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [5] ------ nova
2025-06-02 00:41:37.288963 | orchestrator | 2025-06-02 00:41:37 | INFO  | A [4] ----- prometheus
2025-06-02 00:41:37.289058 | orchestrator | 2025-06-02 00:41:37 | INFO  | D [5] ------ grafana
2025-06-02 00:41:37.475863 | orchestrator | 2025-06-02 00:41:37 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-02 00:41:37.475959 | orchestrator | 2025-06-02 00:41:37 | INFO  | Tasks are running in the background
2025-06-02 00:41:40.029727 | orchestrator | 2025-06-02 00:41:40 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-02 00:41:42.128704 | orchestrator | 2025-06-02 00:41:42 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:41:42.128816 | orchestrator | 2025-06-02 00:41:42 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED
2025-06-02 00:41:42.129017 | orchestrator | 2025-06-02 00:41:42 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED
2025-06-02 00:41:42.129544 | orchestrator | 2025-06-02 00:41:42 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED
2025-06-02 00:41:42.130077 | orchestrator | 2025-06-02 00:41:42 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:41:42.130732 | orchestrator | 2025-06-02 00:41:42 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED
2025-06-02 00:41:42.131113 | orchestrator | 2025-06-02 00:41:42 | INFO  | Task 177735e1-af79-4a13-9462-81797314c803 is in state STARTED
2025-06-02 00:41:42.131187 | orchestrator | 2025-06-02 00:41:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:41:45.187678 | orchestrator | 2025-06-02 00:41:45 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:41:45.193722 | orchestrator | 2025-06-02 00:41:45 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED
2025-06-02 00:41:45.200648
| orchestrator | 2025-06-02 00:41:45 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:41:45.211527 | orchestrator | 2025-06-02 00:41:45 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:41:45.220587 | orchestrator | 2025-06-02 00:41:45 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:41:45.222978 | orchestrator | 2025-06-02 00:41:45 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:41:45.223619 | orchestrator | 2025-06-02 00:41:45 | INFO  | Task 177735e1-af79-4a13-9462-81797314c803 is in state STARTED 2025-06-02 00:41:45.223645 | orchestrator | 2025-06-02 00:41:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:41:48.255296 | orchestrator | 2025-06-02 00:41:48 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:41:48.255452 | orchestrator | 2025-06-02 00:41:48 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:41:48.256111 | orchestrator | 2025-06-02 00:41:48 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:41:48.257607 | orchestrator | 2025-06-02 00:41:48 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:41:48.261281 | orchestrator | 2025-06-02 00:41:48 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:41:48.261702 | orchestrator | 2025-06-02 00:41:48 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:41:48.263522 | orchestrator | 2025-06-02 00:41:48 | INFO  | Task 177735e1-af79-4a13-9462-81797314c803 is in state STARTED 2025-06-02 00:41:48.263569 | orchestrator | 2025-06-02 00:41:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:41:51.329437 | orchestrator | 2025-06-02 00:41:51 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:41:51.331133 | 
orchestrator | 2025-06-02 00:41:51 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:41:51.333003 | orchestrator | 2025-06-02 00:41:51 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:41:51.334084 | orchestrator | 2025-06-02 00:41:51 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:41:51.338209 | orchestrator | 2025-06-02 00:41:51 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:41:51.338260 | orchestrator | 2025-06-02 00:41:51 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:41:51.339926 | orchestrator | 2025-06-02 00:41:51 | INFO  | Task 177735e1-af79-4a13-9462-81797314c803 is in state STARTED 2025-06-02 00:41:51.339948 | orchestrator | 2025-06-02 00:41:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:41:54.403524 | orchestrator | 2025-06-02 00:41:54 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:41:54.404407 | orchestrator | 2025-06-02 00:41:54 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:41:54.405438 | orchestrator | 2025-06-02 00:41:54 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:41:54.406355 | orchestrator | 2025-06-02 00:41:54 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:41:54.407901 | orchestrator | 2025-06-02 00:41:54 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:41:54.410885 | orchestrator | 2025-06-02 00:41:54 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:41:54.412265 | orchestrator | 2025-06-02 00:41:54 | INFO  | Task 177735e1-af79-4a13-9462-81797314c803 is in state STARTED 2025-06-02 00:41:54.412291 | orchestrator | 2025-06-02 00:41:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:41:57.462893 | 
orchestrator | 2025-06-02 00:41:57 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:41:57.463893 | orchestrator | 2025-06-02 00:41:57 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:41:57.465404 | orchestrator | 2025-06-02 00:41:57 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:41:57.466936 | orchestrator | 2025-06-02 00:41:57 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:41:57.467855 | orchestrator | 2025-06-02 00:41:57 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:41:57.469360 | orchestrator | 2025-06-02 00:41:57 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:41:57.470776 | orchestrator | 2025-06-02 00:41:57 | INFO  | Task 177735e1-af79-4a13-9462-81797314c803 is in state STARTED 2025-06-02 00:41:57.470819 | orchestrator | 2025-06-02 00:41:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:00.502664 | orchestrator | 2025-06-02 00:42:00 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:00.503510 | orchestrator | 2025-06-02 00:42:00 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:00.507292 | orchestrator | 2025-06-02 00:42:00 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:42:00.507331 | orchestrator | 2025-06-02 00:42:00 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:00.513328 | orchestrator | 2025-06-02 00:42:00 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:00.513358 | orchestrator | 2025-06-02 00:42:00 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:00.513369 | orchestrator | 2025-06-02 00:42:00 | INFO  | Task 177735e1-af79-4a13-9462-81797314c803 is in state STARTED 2025-06-02 
00:42:00.513381 | orchestrator | 2025-06-02 00:42:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:03.600705 | orchestrator | 2025-06-02 00:42:03 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:03.600869 | orchestrator | 2025-06-02 00:42:03 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:03.600888 | orchestrator | 2025-06-02 00:42:03 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:42:03.600975 | orchestrator | 2025-06-02 00:42:03 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:03.601849 | orchestrator | 2025-06-02 00:42:03 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:03.608029 | orchestrator | 2025-06-02 00:42:03.608066 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-02 00:42:03.608079 | orchestrator | 2025-06-02 00:42:03.608090 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-06-02 00:42:03.608101 | orchestrator | Monday 02 June 2025 00:41:47 +0000 (0:00:00.453) 0:00:00.453 *********** 2025-06-02 00:42:03.608113 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:03.608127 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:42:03.608139 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:42:03.608150 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:42:03.608161 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:42:03.608172 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:42:03.608183 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:42:03.608194 | orchestrator | 2025-06-02 00:42:03.608206 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-06-02 00:42:03.608217 | orchestrator | Monday 02 June 2025 00:41:52 +0000 (0:00:04.714) 0:00:05.167 *********** 2025-06-02 00:42:03.608228 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-02 00:42:03.608239 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 00:42:03.608251 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 00:42:03.608261 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 00:42:03.608272 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 00:42:03.608283 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 00:42:03.608294 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 00:42:03.608305 | orchestrator | 2025-06-02 00:42:03.608316 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-06-02 00:42:03.608327 | orchestrator | Monday 02 June 2025 00:41:54 +0000 (0:00:01.933) 0:00:07.101 *********** 2025-06-02 00:42:03.608342 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 00:41:53.592728', 'end': '2025-06-02 00:41:53.602126', 'delta': '0:00:00.009398', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 00:42:03.608363 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 00:41:53.467959', 'end': '2025-06-02 00:41:53.473780', 'delta': '0:00:00.005821', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 00:42:03.608389 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 00:41:53.567883', 'end': '2025-06-02 00:41:53.576227', 'delta': '0:00:00.008344', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 00:42:03.608442 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 00:41:53.773184', 'end': '2025-06-02 00:41:53.781852', 'delta': '0:00:00.008668', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 00:42:03.608457 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 00:41:53.887620', 'end': '2025-06-02 00:41:53.896195', 'delta': '0:00:00.008575', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 00:42:03.608487 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 00:41:54.152747', 'end': '2025-06-02 00:41:54.162475', 'delta': '0:00:00.009728', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 00:42:03.608503 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 00:41:54.416561', 'end': '2025-06-02 00:41:54.424867', 'delta': '0:00:00.008306', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 00:42:03.608526 | orchestrator | 2025-06-02 00:42:03.608538 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-06-02 00:42:03.608550 | orchestrator | Monday 02 June 2025 00:41:56 +0000 (0:00:02.164) 0:00:09.265 *********** 2025-06-02 00:42:03.608561 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-02 00:42:03.608572 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 00:42:03.608583 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 00:42:03.608594 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 00:42:03.608605 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 00:42:03.608616 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 00:42:03.608627 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 00:42:03.608640 | orchestrator | 2025-06-02 00:42:03.608653 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-06-02 00:42:03.608667 | orchestrator | Monday 02 June 2025 00:41:58 +0000 (0:00:01.358) 0:00:10.624 *********** 2025-06-02 00:42:03.608679 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-02 00:42:03.608692 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 00:42:03.608705 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 00:42:03.608718 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 00:42:03.608731 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 00:42:03.608744 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 00:42:03.608760 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 00:42:03.608779 | orchestrator | 2025-06-02 00:42:03.608799 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:42:03.608830 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:03.608853 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:03.608876 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:03.608895 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:03.608913 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:03.608926 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:03.608939 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:03.608952 | orchestrator | 2025-06-02 00:42:03.608965 | orchestrator | 2025-06-02 00:42:03.608979 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:42:03.608990 | orchestrator | Monday 02 June 2025 00:42:02 +0000 (0:00:03.972) 0:00:14.596 *********** 2025-06-02 00:42:03.609001 | orchestrator | =============================================================================== 2025-06-02 00:42:03.609012 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.71s 2025-06-02 00:42:03.609023 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.97s 2025-06-02 00:42:03.609034 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.16s 2025-06-02 00:42:03.609052 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.93s 2025-06-02 00:42:03.609063 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. 
---- 1.36s 2025-06-02 00:42:03.609100 | orchestrator | 2025-06-02 00:42:03 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:03.609113 | orchestrator | 2025-06-02 00:42:03 | INFO  | Task 177735e1-af79-4a13-9462-81797314c803 is in state SUCCESS 2025-06-02 00:42:03.609124 | orchestrator | 2025-06-02 00:42:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:06.645693 | orchestrator | 2025-06-02 00:42:06 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:06.646405 | orchestrator | 2025-06-02 00:42:06 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:06.649775 | orchestrator | 2025-06-02 00:42:06 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:06.651522 | orchestrator | 2025-06-02 00:42:06 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:42:06.652904 | orchestrator | 2025-06-02 00:42:06 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:06.653250 | orchestrator | 2025-06-02 00:42:06 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:06.653810 | orchestrator | 2025-06-02 00:42:06 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:06.653834 | orchestrator | 2025-06-02 00:42:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:09.696712 | orchestrator | 2025-06-02 00:42:09 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:09.696924 | orchestrator | 2025-06-02 00:42:09 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:09.702710 | orchestrator | 2025-06-02 00:42:09 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:09.703351 | orchestrator | 2025-06-02 00:42:09 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state 
STARTED 2025-06-02 00:42:09.704157 | orchestrator | 2025-06-02 00:42:09 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:09.704414 | orchestrator | 2025-06-02 00:42:09 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:09.705191 | orchestrator | 2025-06-02 00:42:09 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:09.707422 | orchestrator | 2025-06-02 00:42:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:12.758266 | orchestrator | 2025-06-02 00:42:12 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:12.759109 | orchestrator | 2025-06-02 00:42:12 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:12.760113 | orchestrator | 2025-06-02 00:42:12 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:12.760953 | orchestrator | 2025-06-02 00:42:12 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:42:12.762355 | orchestrator | 2025-06-02 00:42:12 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:12.763547 | orchestrator | 2025-06-02 00:42:12 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:12.764458 | orchestrator | 2025-06-02 00:42:12 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:12.764534 | orchestrator | 2025-06-02 00:42:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:15.805960 | orchestrator | 2025-06-02 00:42:15 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:15.807772 | orchestrator | 2025-06-02 00:42:15 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:15.812802 | orchestrator | 2025-06-02 00:42:15 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 
2025-06-02 00:42:15.812841 | orchestrator | 2025-06-02 00:42:15 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:42:15.815693 | orchestrator | 2025-06-02 00:42:15 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:15.816677 | orchestrator | 2025-06-02 00:42:15 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:15.818818 | orchestrator | 2025-06-02 00:42:15 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:15.818841 | orchestrator | 2025-06-02 00:42:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:18.865034 | orchestrator | 2025-06-02 00:42:18 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:18.868298 | orchestrator | 2025-06-02 00:42:18 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:18.868970 | orchestrator | 2025-06-02 00:42:18 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:18.871223 | orchestrator | 2025-06-02 00:42:18 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state STARTED 2025-06-02 00:42:18.873285 | orchestrator | 2025-06-02 00:42:18 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:18.878876 | orchestrator | 2025-06-02 00:42:18 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:18.878921 | orchestrator | 2025-06-02 00:42:18 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:18.880348 | orchestrator | 2025-06-02 00:42:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:21.952192 | orchestrator | 2025-06-02 00:42:21 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:21.953754 | orchestrator | 2025-06-02 00:42:21 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 
2025-06-02 00:42:21.955627 | orchestrator | 2025-06-02 00:42:21 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:21.958693 | orchestrator | 2025-06-02 00:42:21 | INFO  | Task 7c2802c7-86d7-4fb3-8ef4-8b4235d16e58 is in state SUCCESS 2025-06-02 00:42:21.959124 | orchestrator | 2025-06-02 00:42:21 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:21.963160 | orchestrator | 2025-06-02 00:42:21 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:21.966523 | orchestrator | 2025-06-02 00:42:21 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:21.966568 | orchestrator | 2025-06-02 00:42:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:25.010612 | orchestrator | 2025-06-02 00:42:25 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:25.012594 | orchestrator | 2025-06-02 00:42:25 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:25.014934 | orchestrator | 2025-06-02 00:42:25 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:25.018355 | orchestrator | 2025-06-02 00:42:25 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:25.019533 | orchestrator | 2025-06-02 00:42:25 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:25.020912 | orchestrator | 2025-06-02 00:42:25 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:25.020977 | orchestrator | 2025-06-02 00:42:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:28.075921 | orchestrator | 2025-06-02 00:42:28 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:28.081249 | orchestrator | 2025-06-02 00:42:28 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 
2025-06-02 00:42:28.082991 | orchestrator | 2025-06-02 00:42:28 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:28.092408 | orchestrator | 2025-06-02 00:42:28 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:28.104400 | orchestrator | 2025-06-02 00:42:28 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:28.107068 | orchestrator | 2025-06-02 00:42:28 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:28.107113 | orchestrator | 2025-06-02 00:42:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:31.144075 | orchestrator | 2025-06-02 00:42:31 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:31.145240 | orchestrator | 2025-06-02 00:42:31 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:31.145510 | orchestrator | 2025-06-02 00:42:31 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:31.148082 | orchestrator | 2025-06-02 00:42:31 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:31.148490 | orchestrator | 2025-06-02 00:42:31 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:31.150331 | orchestrator | 2025-06-02 00:42:31 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:31.154415 | orchestrator | 2025-06-02 00:42:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:34.185638 | orchestrator | 2025-06-02 00:42:34 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:34.185737 | orchestrator | 2025-06-02 00:42:34 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:34.185762 | orchestrator | 2025-06-02 00:42:34 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 
2025-06-02 00:42:34.186948 | orchestrator | 2025-06-02 00:42:34 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state STARTED 2025-06-02 00:42:34.188046 | orchestrator | 2025-06-02 00:42:34 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:34.190593 | orchestrator | 2025-06-02 00:42:34 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:34.191198 | orchestrator | 2025-06-02 00:42:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:37.287555 | orchestrator | 2025-06-02 00:42:37 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:37.294158 | orchestrator | 2025-06-02 00:42:37 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:37.295379 | orchestrator | 2025-06-02 00:42:37 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:37.297742 | orchestrator | 2025-06-02 00:42:37 | INFO  | Task 768e1d8a-a03b-4802-81b9-30b418df4e4e is in state SUCCESS 2025-06-02 00:42:37.300671 | orchestrator | 2025-06-02 00:42:37 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:37.301751 | orchestrator | 2025-06-02 00:42:37 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:37.302099 | orchestrator | 2025-06-02 00:42:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:40.336177 | orchestrator | 2025-06-02 00:42:40 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:40.336950 | orchestrator | 2025-06-02 00:42:40 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:40.342495 | orchestrator | 2025-06-02 00:42:40 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:40.342543 | orchestrator | 2025-06-02 00:42:40 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 
2025-06-02 00:42:40.344645 | orchestrator | 2025-06-02 00:42:40 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:40.344675 | orchestrator | 2025-06-02 00:42:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:43.381351 | orchestrator | 2025-06-02 00:42:43 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:43.383364 | orchestrator | 2025-06-02 00:42:43 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:43.386640 | orchestrator | 2025-06-02 00:42:43 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:43.386680 | orchestrator | 2025-06-02 00:42:43 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:43.386695 | orchestrator | 2025-06-02 00:42:43 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state STARTED 2025-06-02 00:42:43.386711 | orchestrator | 2025-06-02 00:42:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:46.424234 | orchestrator | 2025-06-02 00:42:46 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:46.425912 | orchestrator | 2025-06-02 00:42:46 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:46.427206 | orchestrator | 2025-06-02 00:42:46 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:46.428937 | orchestrator | 2025-06-02 00:42:46 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:46.429888 | orchestrator | 2025-06-02 00:42:46 | INFO  | Task 395470d6-8548-41f2-895e-7ae0e02a055b is in state SUCCESS 2025-06-02 00:42:46.432229 | orchestrator | 2025-06-02 00:42:46.432273 | orchestrator | 2025-06-02 00:42:46.432285 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-02 00:42:46.432297 | orchestrator | 2025-06-02 
00:42:46.432309 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-02 00:42:46.432320 | orchestrator | Monday 02 June 2025 00:41:48 +0000 (0:00:00.897) 0:00:00.897 *********** 2025-06-02 00:42:46.432332 | orchestrator | ok: [testbed-manager] => { 2025-06-02 00:42:46.432347 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-06-02 00:42:46.432360 | orchestrator | } 2025-06-02 00:42:46.432372 | orchestrator | 2025-06-02 00:42:46.432383 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-02 00:42:46.432400 | orchestrator | Monday 02 June 2025 00:41:49 +0000 (0:00:00.387) 0:00:01.285 *********** 2025-06-02 00:42:46.432411 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.432424 | orchestrator | 2025-06-02 00:42:46.432435 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-02 00:42:46.432501 | orchestrator | Monday 02 June 2025 00:41:50 +0000 (0:00:01.717) 0:00:03.002 *********** 2025-06-02 00:42:46.432515 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-02 00:42:46.432526 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-02 00:42:46.432536 | orchestrator | 2025-06-02 00:42:46.432547 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-02 00:42:46.432558 | orchestrator | Monday 02 June 2025 00:41:52 +0000 (0:00:01.171) 0:00:04.174 *********** 2025-06-02 00:42:46.432568 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.432580 | orchestrator | 2025-06-02 00:42:46.432591 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-02 00:42:46.432602 | orchestrator | Monday 02 June 2025 00:41:54 +0000 (0:00:02.193) 0:00:06.367 
*********** 2025-06-02 00:42:46.432612 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.432623 | orchestrator | 2025-06-02 00:42:46.432634 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-02 00:42:46.432645 | orchestrator | Monday 02 June 2025 00:41:55 +0000 (0:00:01.140) 0:00:07.508 *********** 2025-06-02 00:42:46.432655 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-06-02 00:42:46.432666 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.432677 | orchestrator | 2025-06-02 00:42:46.432688 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-02 00:42:46.432698 | orchestrator | Monday 02 June 2025 00:42:19 +0000 (0:00:23.704) 0:00:31.213 *********** 2025-06-02 00:42:46.432709 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.432720 | orchestrator | 2025-06-02 00:42:46.432730 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:42:46.432742 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:46.432754 | orchestrator | 2025-06-02 00:42:46.432765 | orchestrator | 2025-06-02 00:42:46.432775 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:42:46.432786 | orchestrator | Monday 02 June 2025 00:42:21 +0000 (0:00:02.024) 0:00:33.237 *********** 2025-06-02 00:42:46.432797 | orchestrator | =============================================================================== 2025-06-02 00:42:46.432807 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 23.70s 2025-06-02 00:42:46.432820 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.19s 2025-06-02 00:42:46.432834 | orchestrator | osism.services.homer : Restart homer 
service ---------------------------- 2.02s 2025-06-02 00:42:46.432847 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.72s 2025-06-02 00:42:46.432859 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.17s 2025-06-02 00:42:46.432871 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.14s 2025-06-02 00:42:46.432884 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.39s 2025-06-02 00:42:46.432896 | orchestrator | 2025-06-02 00:42:46.432909 | orchestrator | 2025-06-02 00:42:46.432921 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-02 00:42:46.432934 | orchestrator | 2025-06-02 00:42:46.432946 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-02 00:42:46.432958 | orchestrator | Monday 02 June 2025 00:41:48 +0000 (0:00:00.885) 0:00:00.885 *********** 2025-06-02 00:42:46.432972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-02 00:42:46.432985 | orchestrator | 2025-06-02 00:42:46.432999 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-02 00:42:46.433011 | orchestrator | Monday 02 June 2025 00:41:49 +0000 (0:00:00.857) 0:00:01.742 *********** 2025-06-02 00:42:46.433031 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-02 00:42:46.433045 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-06-02 00:42:46.433057 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-02 00:42:46.433069 | orchestrator | 2025-06-02 00:42:46.433082 | orchestrator | TASK [osism.services.openstackclient : Copy 
docker-compose.yml file] *********** 2025-06-02 00:42:46.433094 | orchestrator | Monday 02 June 2025 00:41:51 +0000 (0:00:02.125) 0:00:03.868 *********** 2025-06-02 00:42:46.433107 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.433119 | orchestrator | 2025-06-02 00:42:46.433132 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-02 00:42:46.433145 | orchestrator | Monday 02 June 2025 00:41:53 +0000 (0:00:01.379) 0:00:05.248 *********** 2025-06-02 00:42:46.433171 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-02 00:42:46.433183 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.433194 | orchestrator | 2025-06-02 00:42:46.433205 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-02 00:42:46.433216 | orchestrator | Monday 02 June 2025 00:42:28 +0000 (0:00:35.220) 0:00:40.468 *********** 2025-06-02 00:42:46.433227 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.433237 | orchestrator | 2025-06-02 00:42:46.433248 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-02 00:42:46.433259 | orchestrator | Monday 02 June 2025 00:42:29 +0000 (0:00:01.400) 0:00:41.869 *********** 2025-06-02 00:42:46.433269 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.433280 | orchestrator | 2025-06-02 00:42:46.433296 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-02 00:42:46.433307 | orchestrator | Monday 02 June 2025 00:42:30 +0000 (0:00:00.549) 0:00:42.418 *********** 2025-06-02 00:42:46.433318 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.433329 | orchestrator | 2025-06-02 00:42:46.433339 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-02 00:42:46.433350 | orchestrator 
| Monday 02 June 2025 00:42:32 +0000 (0:00:01.620) 0:00:44.039 *********** 2025-06-02 00:42:46.433361 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.433372 | orchestrator | 2025-06-02 00:42:46.433382 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-02 00:42:46.433393 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:00.812) 0:00:44.852 *********** 2025-06-02 00:42:46.433403 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.433414 | orchestrator | 2025-06-02 00:42:46.433425 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-02 00:42:46.433435 | orchestrator | Monday 02 June 2025 00:42:33 +0000 (0:00:00.607) 0:00:45.459 *********** 2025-06-02 00:42:46.433446 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.433457 | orchestrator | 2025-06-02 00:42:46.433490 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:42:46.433502 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:46.433513 | orchestrator | 2025-06-02 00:42:46.433524 | orchestrator | 2025-06-02 00:42:46.433534 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:42:46.433545 | orchestrator | Monday 02 June 2025 00:42:33 +0000 (0:00:00.434) 0:00:45.893 *********** 2025-06-02 00:42:46.433555 | orchestrator | =============================================================================== 2025-06-02 00:42:46.433566 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.22s 2025-06-02 00:42:46.433577 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.13s 2025-06-02 00:42:46.433588 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.62s 2025-06-02 
00:42:46.433599 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.40s 2025-06-02 00:42:46.433616 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.38s 2025-06-02 00:42:46.433627 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.86s 2025-06-02 00:42:46.433638 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.81s 2025-06-02 00:42:46.433649 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.61s 2025-06-02 00:42:46.433660 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.55s 2025-06-02 00:42:46.433670 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s 2025-06-02 00:42:46.433681 | orchestrator | 2025-06-02 00:42:46.433692 | orchestrator | 2025-06-02 00:42:46.433702 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:42:46.433713 | orchestrator | 2025-06-02 00:42:46.433724 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:42:46.433735 | orchestrator | Monday 02 June 2025 00:41:48 +0000 (0:00:00.509) 0:00:00.509 *********** 2025-06-02 00:42:46.433746 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-02 00:42:46.433756 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-02 00:42:46.433767 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-02 00:42:46.433778 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-02 00:42:46.433789 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-02 00:42:46.433799 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-02 00:42:46.433810 | orchestrator 
| changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-02 00:42:46.433820 | orchestrator | 2025-06-02 00:42:46.433831 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-02 00:42:46.433842 | orchestrator | 2025-06-02 00:42:46.433853 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-02 00:42:46.433863 | orchestrator | Monday 02 June 2025 00:41:50 +0000 (0:00:02.703) 0:00:03.212 *********** 2025-06-02 00:42:46.433888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:42:46.433902 | orchestrator | 2025-06-02 00:42:46.433913 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-02 00:42:46.433923 | orchestrator | Monday 02 June 2025 00:41:53 +0000 (0:00:02.551) 0:00:05.764 *********** 2025-06-02 00:42:46.433934 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.433945 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:42:46.433956 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:42:46.433967 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:42:46.433979 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:42:46.433995 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:42:46.434006 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:42:46.434075 | orchestrator | 2025-06-02 00:42:46.434090 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-02 00:42:46.434101 | orchestrator | Monday 02 June 2025 00:41:55 +0000 (0:00:02.022) 0:00:07.786 *********** 2025-06-02 00:42:46.434111 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.434122 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:42:46.434134 | orchestrator | 
ok: [testbed-node-2] 2025-06-02 00:42:46.434145 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:42:46.434156 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:42:46.434167 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:42:46.434178 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:42:46.434189 | orchestrator | 2025-06-02 00:42:46.434200 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-02 00:42:46.434212 | orchestrator | Monday 02 June 2025 00:41:57 +0000 (0:00:02.615) 0:00:10.401 *********** 2025-06-02 00:42:46.434223 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.434241 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:42:46.434252 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:42:46.434264 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:42:46.434275 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:42:46.434286 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:42:46.434297 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:42:46.434307 | orchestrator | 2025-06-02 00:42:46.434318 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-02 00:42:46.434329 | orchestrator | Monday 02 June 2025 00:42:00 +0000 (0:00:02.137) 0:00:12.538 *********** 2025-06-02 00:42:46.434340 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.434351 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:42:46.434362 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:42:46.434373 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:42:46.434384 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:42:46.434395 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:42:46.434406 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:42:46.434417 | orchestrator | 2025-06-02 00:42:46.434428 | orchestrator | TASK [osism.services.netdata : Install package netdata] 
************************ 2025-06-02 00:42:46.434439 | orchestrator | Monday 02 June 2025 00:42:09 +0000 (0:00:09.490) 0:00:22.029 *********** 2025-06-02 00:42:46.434450 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.434460 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:42:46.434525 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:42:46.434537 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:42:46.434548 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:42:46.434558 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:42:46.434569 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:42:46.434580 | orchestrator | 2025-06-02 00:42:46.434591 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-02 00:42:46.434602 | orchestrator | Monday 02 June 2025 00:42:24 +0000 (0:00:15.281) 0:00:37.310 *********** 2025-06-02 00:42:46.434613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:42:46.434626 | orchestrator | 2025-06-02 00:42:46.434637 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-02 00:42:46.434648 | orchestrator | Monday 02 June 2025 00:42:25 +0000 (0:00:01.143) 0:00:38.454 *********** 2025-06-02 00:42:46.434659 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-02 00:42:46.434670 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-02 00:42:46.434681 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-02 00:42:46.434692 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-02 00:42:46.434703 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-02 00:42:46.434714 | orchestrator | changed: 
[testbed-node-4] => (item=netdata.conf) 2025-06-02 00:42:46.434725 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-02 00:42:46.434735 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-02 00:42:46.434746 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-06-02 00:42:46.434757 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-02 00:42:46.434767 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-02 00:42:46.434778 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-02 00:42:46.434789 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-02 00:42:46.434799 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-02 00:42:46.434810 | orchestrator | 2025-06-02 00:42:46.434821 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-02 00:42:46.434832 | orchestrator | Monday 02 June 2025 00:42:30 +0000 (0:00:04.876) 0:00:43.331 *********** 2025-06-02 00:42:46.434858 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.434869 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:42:46.434880 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:42:46.434891 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:42:46.434902 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:42:46.434913 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:42:46.434924 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:42:46.434935 | orchestrator | 2025-06-02 00:42:46.434946 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-02 00:42:46.434957 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:01.163) 0:00:44.495 *********** 2025-06-02 00:42:46.434967 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:42:46.434978 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.434989 | 
orchestrator | changed: [testbed-node-1] 2025-06-02 00:42:46.435000 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:42:46.435011 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:42:46.435022 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:42:46.435033 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:42:46.435043 | orchestrator | 2025-06-02 00:42:46.435055 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-02 00:42:46.435073 | orchestrator | Monday 02 June 2025 00:42:33 +0000 (0:00:01.801) 0:00:46.296 *********** 2025-06-02 00:42:46.435085 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.435096 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:42:46.435107 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:42:46.435118 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:42:46.435128 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:42:46.435139 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:42:46.435150 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:42:46.435161 | orchestrator | 2025-06-02 00:42:46.435172 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-02 00:42:46.435215 | orchestrator | Monday 02 June 2025 00:42:35 +0000 (0:00:01.723) 0:00:48.020 *********** 2025-06-02 00:42:46.435227 | orchestrator | ok: [testbed-manager] 2025-06-02 00:42:46.435238 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:42:46.435249 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:42:46.435260 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:42:46.435275 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:42:46.435286 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:42:46.435297 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:42:46.435308 | orchestrator | 2025-06-02 00:42:46.435319 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-02 
00:42:46.435330 | orchestrator | Monday 02 June 2025 00:42:37 +0000 (0:00:01.755) 0:00:49.775 *********** 2025-06-02 00:42:46.435341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-02 00:42:46.435354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:42:46.435365 | orchestrator | 2025-06-02 00:42:46.435376 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-02 00:42:46.435386 | orchestrator | Monday 02 June 2025 00:42:38 +0000 (0:00:01.310) 0:00:51.085 *********** 2025-06-02 00:42:46.435397 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.435408 | orchestrator | 2025-06-02 00:42:46.435419 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-02 00:42:46.435430 | orchestrator | Monday 02 June 2025 00:42:40 +0000 (0:00:01.718) 0:00:52.803 *********** 2025-06-02 00:42:46.435440 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:42:46.435451 | orchestrator | changed: [testbed-manager] 2025-06-02 00:42:46.435462 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:42:46.435503 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:42:46.435523 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:42:46.435543 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:42:46.435554 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:42:46.435565 | orchestrator | 2025-06-02 00:42:46.435576 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:42:46.435587 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:46.435598 | 
orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:46.435609 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:46.435620 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:46.435631 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:46.435642 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:46.435653 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:42:46.435663 | orchestrator | 2025-06-02 00:42:46.435674 | orchestrator | 2025-06-02 00:42:46.435685 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:42:46.435696 | orchestrator | Monday 02 June 2025 00:42:43 +0000 (0:00:03.314) 0:00:56.118 *********** 2025-06-02 00:42:46.435707 | orchestrator | =============================================================================== 2025-06-02 00:42:46.435718 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 15.28s 2025-06-02 00:42:46.435729 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.49s 2025-06-02 00:42:46.435739 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.88s 2025-06-02 00:42:46.435750 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.31s 2025-06-02 00:42:46.435761 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.70s 2025-06-02 00:42:46.435771 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.62s 2025-06-02 00:42:46.435782 | 
orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.55s 2025-06-02 00:42:46.435793 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.14s 2025-06-02 00:42:46.435803 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.02s 2025-06-02 00:42:46.435814 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.80s 2025-06-02 00:42:46.435824 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.76s 2025-06-02 00:42:46.435842 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.72s 2025-06-02 00:42:46.435853 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.72s 2025-06-02 00:42:46.435864 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.31s 2025-06-02 00:42:46.435875 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.16s 2025-06-02 00:42:46.435886 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.14s 2025-06-02 00:42:46.435897 | orchestrator | 2025-06-02 00:42:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:42:49.474317 | orchestrator | 2025-06-02 00:42:49 | INFO  | Task ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state STARTED 2025-06-02 00:42:49.476637 | orchestrator | 2025-06-02 00:42:49 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:42:49.479531 | orchestrator | 2025-06-02 00:42:49 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:42:49.481096 | orchestrator | 2025-06-02 00:42:49 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:42:49.481234 | orchestrator | 2025-06-02 00:42:49 | INFO  | Wait 1 second(s) until the next check 
2025-06-02 00:43:29.109399 | orchestrator | 2025-06-02 00:43:29 | INFO  | Task
ba1327b0-2955-44d4-bb43-8c7b1cf80695 is in state SUCCESS 2025-06-02 00:43:29.109546 | orchestrator | 2025-06-02 00:43:29 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:43:29.109935 | orchestrator | 2025-06-02 00:43:29 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:43:29.110996 | orchestrator | 2025-06-02 00:43:29 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:43:29.111022 | orchestrator | 2025-06-02 00:43:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:43:32.145946 | orchestrator | 2025-06-02 00:43:32 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:43:32.146230 | orchestrator | 2025-06-02 00:43:32 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:43:32.146873 | orchestrator | 2025-06-02 00:43:32 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:43:32.146895 | orchestrator | 2025-06-02 00:43:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:43:35.190354 | orchestrator | 2025-06-02 00:43:35 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:43:35.191665 | orchestrator | 2025-06-02 00:43:35 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:43:35.193777 | orchestrator | 2025-06-02 00:43:35 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:43:35.193787 | orchestrator | 2025-06-02 00:43:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:43:38.234601 | orchestrator | 2025-06-02 00:43:38 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:43:38.237943 | orchestrator | 2025-06-02 00:43:38 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state STARTED 2025-06-02 00:43:38.239032 | orchestrator | 2025-06-02 00:43:38 | INFO  | Task 
43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:43:38.239641 | orchestrator | 2025-06-02 00:43:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:44:02.596461 | orchestrator | 2025-06-02 00:44:02 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:02.599248 | orchestrator |
2025-06-02 00:44:02 | INFO  | Task 991243d6-fb90-4fd6-85ca-d4ed258b4912 is in state SUCCESS 2025-06-02 00:44:02.601187 | orchestrator | 2025-06-02 00:44:02.601236 | orchestrator | 2025-06-02 00:44:02.601249 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-06-02 00:44:02.601262 | orchestrator | 2025-06-02 00:44:02.601274 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-06-02 00:44:02.601286 | orchestrator | Monday 02 June 2025 00:42:07 +0000 (0:00:00.221) 0:00:00.221 *********** 2025-06-02 00:44:02.601298 | orchestrator | ok: [testbed-manager] 2025-06-02 00:44:02.601312 | orchestrator | 2025-06-02 00:44:02.601324 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-06-02 00:44:02.601360 | orchestrator | Monday 02 June 2025 00:42:08 +0000 (0:00:00.817) 0:00:01.038 *********** 2025-06-02 00:44:02.601373 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-06-02 00:44:02.601384 | orchestrator | 2025-06-02 00:44:02.601395 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-06-02 00:44:02.601406 | orchestrator | Monday 02 June 2025 00:42:09 +0000 (0:00:00.585) 0:00:01.624 *********** 2025-06-02 00:44:02.601417 | orchestrator | changed: [testbed-manager] 2025-06-02 00:44:02.601428 | orchestrator | 2025-06-02 00:44:02.601439 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-06-02 00:44:02.601450 | orchestrator | Monday 02 June 2025 00:42:11 +0000 (0:00:01.855) 0:00:03.480 *********** 2025-06-02 00:44:02.601461 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-06-02 00:44:02.601472 | orchestrator | ok: [testbed-manager] 2025-06-02 00:44:02.601513 | orchestrator | 2025-06-02 00:44:02.601524 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-06-02 00:44:02.601535 | orchestrator | Monday 02 June 2025 00:43:03 +0000 (0:00:52.606) 0:00:56.087 *********** 2025-06-02 00:44:02.601546 | orchestrator | changed: [testbed-manager] 2025-06-02 00:44:02.601557 | orchestrator | 2025-06-02 00:44:02.601568 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:44:02.601579 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:44:02.601593 | orchestrator | 2025-06-02 00:44:02.601604 | orchestrator | 2025-06-02 00:44:02.601615 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:44:02.601627 | orchestrator | Monday 02 June 2025 00:43:27 +0000 (0:00:23.501) 0:01:19.588 *********** 2025-06-02 00:44:02.601637 | orchestrator | =============================================================================== 2025-06-02 00:44:02.601648 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 52.61s 2025-06-02 00:44:02.601659 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 23.50s 2025-06-02 00:44:02.601670 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.86s 2025-06-02 00:44:02.601681 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.82s 2025-06-02 00:44:02.601693 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.59s 2025-06-02 00:44:02.601704 | orchestrator | 2025-06-02 00:44:02.601715 | orchestrator | 2025-06-02 00:44:02.601726 | orchestrator | PLAY [Apply role common] 
******************************************************* 2025-06-02 00:44:02.601737 | orchestrator | 2025-06-02 00:44:02.601748 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-02 00:44:02.601759 | orchestrator | Monday 02 June 2025 00:41:41 +0000 (0:00:00.245) 0:00:00.245 *********** 2025-06-02 00:44:02.601773 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:44:02.601790 | orchestrator | 2025-06-02 00:44:02.601803 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-06-02 00:44:02.601815 | orchestrator | Monday 02 June 2025 00:41:43 +0000 (0:00:01.261) 0:00:01.506 *********** 2025-06-02 00:44:02.601828 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 00:44:02.601841 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 00:44:02.601854 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 00:44:02.601868 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 00:44:02.601881 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 00:44:02.601893 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 00:44:02.601914 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 00:44:02.601927 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 00:44:02.601942 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 00:44:02.601955 | orchestrator | changed: [testbed-node-3] => 
(item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 00:44:02.601968 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 00:44:02.601986 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 00:44:02.601999 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 00:44:02.602013 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 00:44:02.602114 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 00:44:02.602129 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 00:44:02.602157 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 00:44:02.602169 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 00:44:02.602180 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 00:44:02.602192 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 00:44:02.602203 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 00:44:02.602213 | orchestrator | 2025-06-02 00:44:02.602224 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-02 00:44:02.602235 | orchestrator | Monday 02 June 2025 00:41:47 +0000 (0:00:04.235) 0:00:05.741 *********** 2025-06-02 00:44:02.602246 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:44:02.602260 | orchestrator | 2025-06-02 
00:44:02.602271 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-02 00:44:02.602282 | orchestrator | Monday 02 June 2025 00:41:48 +0000 (0:00:01.337) 0:00:07.079 *********** 2025-06-02 00:44:02.602298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.602314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.602326 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.602346 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.602358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.602397 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.602471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.602542 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602650 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.602662 | orchestrator | 2025-06-02 00:44:02.602674 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-02 00:44:02.602691 | orchestrator | Monday 02 June 2025 00:41:54 +0000 (0:00:05.294) 0:00:12.374 *********** 2025-06-02 00:44:02.602704 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.602716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.602728 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.602739 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:44:02.602759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.602771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.602783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-06-02 00:44:02.602794 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:44:02.602810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.602841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.602853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.602865 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:44:02.602877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.602895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.602907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.602919 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:44:02.602931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.602947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.602965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.602977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.602989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603019 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:44:02.603030 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:44:02.603050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.603062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603085 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:44:02.603097 | orchestrator | 2025-06-02 00:44:02.603108 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-02 00:44:02.603119 | orchestrator | Monday 02 June 2025 00:41:55 +0000 (0:00:01.200) 0:00:13.574 *********** 2025-06-02 00:44:02.603136 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.603154 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603167 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.603196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603219 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:44:02.603230 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:44:02.603242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.603258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603708 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:44:02.603728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.603769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.603807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603830 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:44:02.603842 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:44:02.603854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 00:44:02.603884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.603916 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:44:02.603929 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 00:44:02.603941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:44:02.603964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:44:02.603976 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:44:02.603988 | orchestrator |
2025-06-02 00:44:02.604002 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-06-02 00:44:02.604014 | orchestrator | Monday 02 June 2025 00:41:57 +0000 (0:00:02.692) 0:00:16.266 ***********
2025-06-02 00:44:02.604026 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:44:02.604037 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:44:02.604049 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:44:02.604062 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:44:02.604082 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:44:02.604100 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:44:02.604119 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:44:02.604138 | orchestrator |
2025-06-02 00:44:02.604158 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-06-02 00:44:02.604177 | orchestrator | Monday 02 June 2025 00:41:58 +0000 (0:00:01.019) 0:00:17.286 ***********
2025-06-02 00:44:02.604195 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:44:02.604216 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:44:02.604236 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:44:02.604257 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:44:02.604271 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:44:02.604285 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:44:02.604296 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:44:02.604307 | orchestrator |
2025-06-02 00:44:02.604335 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-06-02 00:44:02.604347 | orchestrator | Monday 02 June 2025 00:41:59 +0000 (0:00:00.938) 0:00:18.225 ***********
2025-06-02 00:44:02.604369 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.604383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.604395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.604407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.604420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.604431 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604514 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.604527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.604539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604562 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604627 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.604651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:44:02.604663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:44:02.604674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:44:02.604686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:44:02.604697 | orchestrator |
2025-06-02 00:44:02.604708 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-06-02 00:44:02.604720 | orchestrator | Monday 02 June 2025 00:42:05 +0000 (0:00:05.334) 0:00:23.559 ***********
2025-06-02 00:44:02.604732 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a directory
2025-06-02 00:44:02.604798 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:44:02.604809 | orchestrator |
2025-06-02 00:44:02.604821 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-06-02 00:44:02.604832 | orchestrator | Monday 02 June 2025 00:42:06 +0000 (0:00:01.025) 0:00:24.584 ***********
2025-06-02 00:44:02.604843 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a directory
2025-06-02 00:44:02.604899 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:44:02.604910 | orchestrator |
2025-06-02 00:44:02.604926 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-06-02 00:44:02.604938 | orchestrator | Monday 02 June 2025 00:42:07 +0000 (0:00:01.020) 0:00:25.605 ***********
2025-06-02 00:44:02.604949 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a directory
2025-06-02 00:44:02.605005 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:44:02.605016 | orchestrator |
2025-06-02 00:44:02.605034 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-06-02 00:44:02.605046 | orchestrator | Monday 02 June 2025 00:42:08 +0000 (0:00:00.694) 0:00:26.300 ***********
2025-06-02 00:44:02.605057 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a directory
2025-06-02 00:44:02.605114 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:44:02.605125 | orchestrator |
2025-06-02 00:44:02.605137 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-06-02 00:44:02.605149 | orchestrator | Monday 02 June 2025 00:42:08 +0000 (0:00:00.752) 0:00:27.053 ***********
2025-06-02 00:44:02.605160 | orchestrator | changed: [testbed-manager]
2025-06-02 00:44:02.605171 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:44:02.605183 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:44:02.605195 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:44:02.605207 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:44:02.605218 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:44:02.605229 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:44:02.605241 | orchestrator |
2025-06-02 00:44:02.605252 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-06-02 00:44:02.605263 | orchestrator | Monday 02 June 2025 00:42:13 +0000 (0:00:04.335) 0:00:31.388 ***********
2025-06-02 00:44:02.605275 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-02 00:44:02.605286 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-02 00:44:02.605298 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-02 00:44:02.605309 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-02 00:44:02.605326 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-02 00:44:02.605338 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-02 00:44:02.605349 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-02 00:44:02.605360 | orchestrator |
2025-06-02 00:44:02.605371 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-06-02 00:44:02.605382 | orchestrator | Monday 02 June 2025 00:42:16 +0000 (0:00:02.935) 0:00:34.324 ***********
2025-06-02 00:44:02.605394 | orchestrator | changed: [testbed-manager]
2025-06-02 00:44:02.605405 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:44:02.605416 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:44:02.605428 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:44:02.605439 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:44:02.605451 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:44:02.605462 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:44:02.605518 | orchestrator |
2025-06-02 00:44:02.605533 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-06-02 00:44:02.605545 | orchestrator | Monday 02 June 2025 00:42:18 +0000 (0:00:02.610) 0:00:36.935 ***********
2025-06-02 00:44:02.605557 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd',
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.605569 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.605588 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.605618 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.605631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.605650 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.605661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.605672 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.605683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.605700 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.605718 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.605730 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.605752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.605764 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.605775 | orchestrator | ok: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.605787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.605799 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.605815 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.605839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:44:02.605851 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.605869 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.605881 | orchestrator | 2025-06-02 00:44:02.605892 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-02 00:44:02.605904 | orchestrator 
| Monday 02 June 2025 00:42:21 +0000 (0:00:02.372) 0:00:39.308 *********** 2025-06-02 00:44:02.605915 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 00:44:02.605926 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 00:44:02.605938 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 00:44:02.605948 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 00:44:02.605959 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 00:44:02.605970 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 00:44:02.605980 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 00:44:02.605991 | orchestrator | 2025-06-02 00:44:02.606003 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-02 00:44:02.606014 | orchestrator | Monday 02 June 2025 00:42:23 +0000 (0:00:02.828) 0:00:42.136 *********** 2025-06-02 00:44:02.606066 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 00:44:02.606078 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 00:44:02.606089 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 00:44:02.606100 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 00:44:02.606111 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 00:44:02.606121 | orchestrator | changed: [testbed-node-4] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 00:44:02.606132 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 00:44:02.606143 | orchestrator | 2025-06-02 00:44:02.606155 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-02 00:44:02.606166 | orchestrator | Monday 02 June 2025 00:42:25 +0000 (0:00:01.698) 0:00:43.835 *********** 2025-06-02 00:44:02.606177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.606190 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.606218 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.606254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.606266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.606284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606326 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.606350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 00:44:02.606373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606401 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606492 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:44:02.606504 | orchestrator | 2025-06-02 00:44:02.606516 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-02 00:44:02.606527 | orchestrator | Monday 02 June 2025 00:42:29 +0000 (0:00:03.553) 0:00:47.389 *********** 2025-06-02 00:44:02.606538 | orchestrator | changed: [testbed-manager] 2025-06-02 00:44:02.606550 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:44:02.606561 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:44:02.606572 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:44:02.606583 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:44:02.606594 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:44:02.606605 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:44:02.606616 | orchestrator | 2025-06-02 00:44:02.606627 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-02 00:44:02.606638 | orchestrator | Monday 02 June 2025 00:42:31 +0000 (0:00:01.932) 0:00:49.321 *********** 2025-06-02 00:44:02.606649 | orchestrator | changed: [testbed-manager] 2025-06-02 00:44:02.606660 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:44:02.606671 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:44:02.606682 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:44:02.606700 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:44:02.606711 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:44:02.606722 | orchestrator | changed: [testbed-node-5] 
2025-06-02 00:44:02.606732 | orchestrator | 2025-06-02 00:44:02.606744 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 00:44:02.606755 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:01.287) 0:00:50.608 *********** 2025-06-02 00:44:02.606766 | orchestrator | 2025-06-02 00:44:02.606777 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 00:44:02.606788 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:00.080) 0:00:50.689 *********** 2025-06-02 00:44:02.606799 | orchestrator | 2025-06-02 00:44:02.606810 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 00:44:02.606821 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:00.103) 0:00:50.792 *********** 2025-06-02 00:44:02.606832 | orchestrator | 2025-06-02 00:44:02.606843 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 00:44:02.606854 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:00.082) 0:00:50.875 *********** 2025-06-02 00:44:02.606865 | orchestrator | 2025-06-02 00:44:02.606876 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 00:44:02.606887 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:00.073) 0:00:50.948 *********** 2025-06-02 00:44:02.606898 | orchestrator | 2025-06-02 00:44:02.606916 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 00:44:02.606928 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:00.178) 0:00:51.127 *********** 2025-06-02 00:44:02.606939 | orchestrator | 2025-06-02 00:44:02.606951 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 00:44:02.606962 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:00.054) 0:00:51.182 
*********** 2025-06-02 00:44:02.606973 | orchestrator | 2025-06-02 00:44:02.606985 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-02 00:44:02.606996 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:00.075) 0:00:51.258 *********** 2025-06-02 00:44:02.607014 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:44:02.607026 | orchestrator | changed: [testbed-manager] 2025-06-02 00:44:02.607037 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:44:02.607049 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:44:02.607060 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:44:02.607072 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:44:02.607082 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:44:02.607094 | orchestrator | 2025-06-02 00:44:02.607105 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-02 00:44:02.607116 | orchestrator | Monday 02 June 2025 00:43:13 +0000 (0:00:40.957) 0:01:32.215 *********** 2025-06-02 00:44:02.607128 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:44:02.607139 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:44:02.607150 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:44:02.607161 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:44:02.607172 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:44:02.607183 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:44:02.607194 | orchestrator | changed: [testbed-manager] 2025-06-02 00:44:02.607206 | orchestrator | 2025-06-02 00:44:02.607217 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-02 00:44:02.607229 | orchestrator | Monday 02 June 2025 00:43:50 +0000 (0:00:36.199) 0:02:08.415 *********** 2025-06-02 00:44:02.607241 | orchestrator | ok: [testbed-manager] 2025-06-02 00:44:02.607252 | orchestrator | ok: [testbed-node-0] 2025-06-02 
00:44:02.607263 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:44:02.607274 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:44:02.607286 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:44:02.607297 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:44:02.607309 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:44:02.607328 | orchestrator | 2025-06-02 00:44:02.607340 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-02 00:44:02.607351 | orchestrator | Monday 02 June 2025 00:43:51 +0000 (0:00:01.857) 0:02:10.273 *********** 2025-06-02 00:44:02.607362 | orchestrator | changed: [testbed-manager] 2025-06-02 00:44:02.607373 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:44:02.607385 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:44:02.607397 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:44:02.607409 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:44:02.607420 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:44:02.607431 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:44:02.607442 | orchestrator | 2025-06-02 00:44:02.607454 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:44:02.607467 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 00:44:02.607499 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 00:44:02.607512 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 00:44:02.607523 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 00:44:02.607534 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 00:44:02.607547 | orchestrator | testbed-node-4 : ok=18  
changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 00:44:02.607558 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 00:44:02.607570 | orchestrator | 2025-06-02 00:44:02.607582 | orchestrator | 2025-06-02 00:44:02.607593 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:44:02.607604 | orchestrator | Monday 02 June 2025 00:44:01 +0000 (0:00:09.387) 0:02:19.661 *********** 2025-06-02 00:44:02.607616 | orchestrator | =============================================================================== 2025-06-02 00:44:02.607628 | orchestrator | common : Restart fluentd container ------------------------------------- 40.96s 2025-06-02 00:44:02.607639 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.20s 2025-06-02 00:44:02.607650 | orchestrator | common : Restart cron container ----------------------------------------- 9.39s 2025-06-02 00:44:02.607662 | orchestrator | common : Copying over config.json files for services -------------------- 5.33s 2025-06-02 00:44:02.607673 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.29s 2025-06-02 00:44:02.607685 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.34s 2025-06-02 00:44:02.607696 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.24s 2025-06-02 00:44:02.607707 | orchestrator | common : Check common containers ---------------------------------------- 3.55s 2025-06-02 00:44:02.607723 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.94s 2025-06-02 00:44:02.607735 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.83s 2025-06-02 00:44:02.607746 | orchestrator | service-cert-copy : common | Copying over backend internal 
TLS key ------ 2.69s 2025-06-02 00:44:02.607757 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.61s 2025-06-02 00:44:02.607768 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.37s 2025-06-02 00:44:02.607779 | orchestrator | common : Creating log volume -------------------------------------------- 1.93s 2025-06-02 00:44:02.607805 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.86s 2025-06-02 00:44:02.607817 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.70s 2025-06-02 00:44:02.607829 | orchestrator | common : include_tasks -------------------------------------------------- 1.34s 2025-06-02 00:44:02.607840 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.29s 2025-06-02 00:44:02.607851 | orchestrator | common : include_tasks -------------------------------------------------- 1.26s 2025-06-02 00:44:02.607862 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.20s 2025-06-02 00:44:02.607874 | orchestrator | 2025-06-02 00:44:02 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:02.607885 | orchestrator | 2025-06-02 00:44:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:44:05.632317 | orchestrator | 2025-06-02 00:44:05 | INFO  | Task f2139764-8b01-43ca-850c-eb0a55b6f401 is in state STARTED 2025-06-02 00:44:05.635555 | orchestrator | 2025-06-02 00:44:05 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:44:05.636001 | orchestrator | 2025-06-02 00:44:05 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:44:05.638003 | orchestrator | 2025-06-02 00:44:05 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:05.638570 | orchestrator | 2025-06-02 00:44:05 
| INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED 2025-06-02 00:44:05.641139 | orchestrator | 2025-06-02 00:44:05 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:05.643405 | orchestrator | 2025-06-02 00:44:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:44:08.676912 | orchestrator | 2025-06-02 00:44:08 | INFO  | Task f2139764-8b01-43ca-850c-eb0a55b6f401 is in state STARTED 2025-06-02 00:44:08.677693 | orchestrator | 2025-06-02 00:44:08 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:44:08.680916 | orchestrator | 2025-06-02 00:44:08 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:44:08.684664 | orchestrator | 2025-06-02 00:44:08 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:08.685393 | orchestrator | 2025-06-02 00:44:08 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED 2025-06-02 00:44:08.686925 | orchestrator | 2025-06-02 00:44:08 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:08.686974 | orchestrator | 2025-06-02 00:44:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:44:11.722109 | orchestrator | 2025-06-02 00:44:11 | INFO  | Task f2139764-8b01-43ca-850c-eb0a55b6f401 is in state STARTED 2025-06-02 00:44:11.722207 | orchestrator | 2025-06-02 00:44:11 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:44:11.722224 | orchestrator | 2025-06-02 00:44:11 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:44:11.724648 | orchestrator | 2025-06-02 00:44:11 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:11.724678 | orchestrator | 2025-06-02 00:44:11 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED 2025-06-02 00:44:11.724690 | orchestrator | 2025-06-02 00:44:11 | INFO  
| Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:11.724701 | orchestrator | 2025-06-02 00:44:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:44:14.770312 | orchestrator | 2025-06-02 00:44:14 | INFO  | Task f2139764-8b01-43ca-850c-eb0a55b6f401 is in state STARTED 2025-06-02 00:44:14.771987 | orchestrator | 2025-06-02 00:44:14 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:44:14.778871 | orchestrator | 2025-06-02 00:44:14 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:44:14.781239 | orchestrator | 2025-06-02 00:44:14 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:14.786951 | orchestrator | 2025-06-02 00:44:14 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED 2025-06-02 00:44:14.794190 | orchestrator | 2025-06-02 00:44:14 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:14.794239 | orchestrator | 2025-06-02 00:44:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:44:17.821932 | orchestrator | 2025-06-02 00:44:17 | INFO  | Task f2139764-8b01-43ca-850c-eb0a55b6f401 is in state STARTED 2025-06-02 00:44:17.822307 | orchestrator | 2025-06-02 00:44:17 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:44:17.822613 | orchestrator | 2025-06-02 00:44:17 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:44:17.823276 | orchestrator | 2025-06-02 00:44:17 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:17.824000 | orchestrator | 2025-06-02 00:44:17 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED 2025-06-02 00:44:17.824547 | orchestrator | 2025-06-02 00:44:17 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:17.824570 | orchestrator | 2025-06-02 00:44:17 | INFO  | Wait 
1 second(s) until the next check 2025-06-02 00:44:20.870910 | orchestrator | 2025-06-02 00:44:20 | INFO  | Task f2139764-8b01-43ca-850c-eb0a55b6f401 is in state STARTED 2025-06-02 00:44:20.871002 | orchestrator | 2025-06-02 00:44:20 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:44:20.871016 | orchestrator | 2025-06-02 00:44:20 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:44:20.871028 | orchestrator | 2025-06-02 00:44:20 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:20.871039 | orchestrator | 2025-06-02 00:44:20 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED 2025-06-02 00:44:20.871050 | orchestrator | 2025-06-02 00:44:20 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:20.871061 | orchestrator | 2025-06-02 00:44:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:44:23.910102 | orchestrator | 2025-06-02 00:44:23 | INFO  | Task f2139764-8b01-43ca-850c-eb0a55b6f401 is in state SUCCESS 2025-06-02 00:44:23.910198 | orchestrator | 2025-06-02 00:44:23 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:44:23.910213 | orchestrator | 2025-06-02 00:44:23 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:44:23.910224 | orchestrator | 2025-06-02 00:44:23 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:23.910236 | orchestrator | 2025-06-02 00:44:23 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED 2025-06-02 00:44:23.912362 | orchestrator | 2025-06-02 00:44:23 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:44:23.912699 | orchestrator | 2025-06-02 00:44:23 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:23.915235 | orchestrator | 2025-06-02 00:44:23 | INFO  | Wait 1 
second(s) until the next check 2025-06-02 00:44:26.942393 | orchestrator | 2025-06-02 00:44:26 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:44:26.942579 | orchestrator | 2025-06-02 00:44:26 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:44:26.942955 | orchestrator | 2025-06-02 00:44:26 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:26.943612 | orchestrator | 2025-06-02 00:44:26 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED 2025-06-02 00:44:26.944092 | orchestrator | 2025-06-02 00:44:26 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:44:26.946152 | orchestrator | 2025-06-02 00:44:26 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:26.946207 | orchestrator | 2025-06-02 00:44:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:44:29.980239 | orchestrator | 2025-06-02 00:44:29 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:44:29.980328 | orchestrator | 2025-06-02 00:44:29 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:44:29.980344 | orchestrator | 2025-06-02 00:44:29 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:44:29.980598 | orchestrator | 2025-06-02 00:44:29 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED 2025-06-02 00:44:29.980623 | orchestrator | 2025-06-02 00:44:29 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:44:29.981177 | orchestrator | 2025-06-02 00:44:29 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:44:29.981270 | orchestrator | 2025-06-02 00:44:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:44:33.030860 | orchestrator | 2025-06-02 00:44:33 | INFO  | Task 
d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:44:33.034320 | orchestrator | 2025-06-02 00:44:33 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:44:33.034352 | orchestrator | 2025-06-02 00:44:33 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:44:33.034364 | orchestrator | 2025-06-02 00:44:33 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state STARTED
2025-06-02 00:44:33.034901 | orchestrator | 2025-06-02 00:44:33 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:44:33.035532 | orchestrator | 2025-06-02 00:44:33 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:44:33.035603 | orchestrator | 2025-06-02 00:44:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:44:36.068729 | orchestrator | 2025-06-02 00:44:36 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:44:36.069844 | orchestrator | 2025-06-02 00:44:36 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:44:36.070579 | orchestrator | 2025-06-02 00:44:36 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:44:36.071883 | orchestrator | 2025-06-02 00:44:36 | INFO  | Task 831a5474-5db4-4993-94b1-8ae3eba99433 is in state SUCCESS
2025-06-02 00:44:36.072933 | orchestrator |
2025-06-02 00:44:36.072964 | orchestrator |
2025-06-02 00:44:36.072976 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 00:44:36.072988 | orchestrator |
2025-06-02 00:44:36.072999 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 00:44:36.073011 | orchestrator | Monday 02 June 2025 00:44:07 +0000 (0:00:00.219) 0:00:00.219 ***********
2025-06-02 00:44:36.073045 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:44:36.073060 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:44:36.073071 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:44:36.073083 | orchestrator |
2025-06-02 00:44:36.073094 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 00:44:36.073105 | orchestrator | Monday 02 June 2025 00:44:07 +0000 (0:00:00.226) 0:00:00.446 ***********
2025-06-02 00:44:36.073117 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-02 00:44:36.073128 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-02 00:44:36.073139 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-02 00:44:36.073150 | orchestrator |
2025-06-02 00:44:36.073161 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-02 00:44:36.073172 | orchestrator |
2025-06-02 00:44:36.073183 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-02 00:44:36.073194 | orchestrator | Monday 02 June 2025 00:44:07 +0000 (0:00:00.314) 0:00:00.761 ***********
2025-06-02 00:44:36.073204 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:44:36.073216 | orchestrator |
2025-06-02 00:44:36.073227 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-02 00:44:36.073237 | orchestrator | Monday 02 June 2025 00:44:08 +0000 (0:00:00.497) 0:00:01.258 ***********
2025-06-02 00:44:36.073249 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 00:44:36.073260 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 00:44:36.073270 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 00:44:36.073281 | orchestrator |
2025-06-02 00:44:36.073292 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-02 00:44:36.073303 | orchestrator | Monday 02 June 2025 00:44:09 +0000 (0:00:01.005) 0:00:02.264 ***********
2025-06-02 00:44:36.073314 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 00:44:36.073325 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 00:44:36.073336 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 00:44:36.073347 | orchestrator |
2025-06-02 00:44:36.073393 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-02 00:44:36.073405 | orchestrator | Monday 02 June 2025 00:44:12 +0000 (0:00:02.796) 0:00:05.060 ***********
2025-06-02 00:44:36.073416 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:44:36.073427 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:44:36.073438 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:44:36.073449 | orchestrator |
2025-06-02 00:44:36.073460 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-02 00:44:36.073471 | orchestrator | Monday 02 June 2025 00:44:14 +0000 (0:00:02.465) 0:00:07.526 ***********
2025-06-02 00:44:36.073482 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:44:36.073492 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:44:36.073503 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:44:36.073514 | orchestrator |
2025-06-02 00:44:36.073525 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:44:36.073549 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:44:36.073596 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:44:36.073611 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:44:36.073625 | orchestrator |
2025-06-02 00:44:36.073638 | orchestrator |
2025-06-02 00:44:36.073651 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:44:36.073665 | orchestrator | Monday 02 June 2025 00:44:22 +0000 (0:00:07.559) 0:00:15.085 ***********
2025-06-02 00:44:36.073687 | orchestrator | ===============================================================================
2025-06-02 00:44:36.073700 | orchestrator | memcached : Restart memcached container --------------------------------- 7.56s
2025-06-02 00:44:36.073713 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.80s
2025-06-02 00:44:36.073726 | orchestrator | memcached : Check memcached container ----------------------------------- 2.47s
2025-06-02 00:44:36.073740 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.01s
2025-06-02 00:44:36.073754 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s
2025-06-02 00:44:36.073767 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s
2025-06-02 00:44:36.073781 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s
2025-06-02 00:44:36.073794 | orchestrator |
2025-06-02 00:44:36.073807 | orchestrator |
2025-06-02 00:44:36.073821 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 00:44:36.073834 | orchestrator |
2025-06-02 00:44:36.073848 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 00:44:36.073861 | orchestrator | Monday 02 June 2025 00:44:08 +0000 (0:00:00.561) 0:00:00.561 ***********
2025-06-02 00:44:36.073899 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:44:36.073914 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:44:36.073927 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:44:36.073939 |
orchestrator |
2025-06-02 00:44:36.073950 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 00:44:36.073974 | orchestrator | Monday 02 June 2025 00:44:08 +0000 (0:00:00.544) 0:00:01.106 ***********
2025-06-02 00:44:36.073986 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-02 00:44:36.073997 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-02 00:44:36.074008 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-02 00:44:36.074065 | orchestrator |
2025-06-02 00:44:36.074079 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-02 00:44:36.074090 | orchestrator |
2025-06-02 00:44:36.074102 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-06-02 00:44:36.074112 | orchestrator | Monday 02 June 2025 00:44:09 +0000 (0:00:00.503) 0:00:01.610 ***********
2025-06-02 00:44:36.074123 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:44:36.074134 | orchestrator |
2025-06-02 00:44:36.074145 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-06-02 00:44:36.074184 | orchestrator | Monday 02 June 2025 00:44:10 +0000 (0:00:00.801) 0:00:02.411 ***********
2025-06-02 00:44:36.074199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074300 | orchestrator |
2025-06-02 00:44:36.074312 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-06-02 00:44:36.074323 | orchestrator | Monday 02 June 2025 00:44:11 +0000 (0:00:01.503) 0:00:03.915 ***********
2025-06-02 00:44:36.074335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074502 | orchestrator |
2025-06-02 00:44:36.074514 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-06-02 00:44:36.074525 | orchestrator | Monday 02 June 2025 00:44:15 +0000 (0:00:03.382) 0:00:07.297 ***********
2025-06-02 00:44:36.074536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074646 | orchestrator |
2025-06-02 00:44:36.074663 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-06-02 00:44:36.074680 | orchestrator | Monday 02 June 2025 00:44:17 +0000 (0:00:02.652) 0:00:09.950 ***********
2025-06-02 00:44:36.074698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 00:44:36.074838 | orchestrator |
2025-06-02 00:44:36.074856 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 00:44:36.074871 | orchestrator | Monday 02 June 2025 00:44:19 +0000 (0:00:01.719) 0:00:11.670 ***********
2025-06-02 00:44:36.074883 | orchestrator |
2025-06-02 00:44:36.074901 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 00:44:36.074918 | orchestrator | Monday 02 June 2025 00:44:19 +0000 (0:00:00.087) 0:00:11.757 ***********
2025-06-02 00:44:36.074928 | orchestrator |
2025-06-02 00:44:36.074938 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 00:44:36.074947 | orchestrator | Monday 02 June 2025 00:44:19 +0000 (0:00:00.085) 0:00:11.843 ***********
2025-06-02 00:44:36.074957 | orchestrator |
2025-06-02 00:44:36.074967 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-02 00:44:36.074976 | orchestrator | Monday 02 June 2025 00:44:19 +0000 (0:00:00.069) 0:00:11.912 ***********
2025-06-02 00:44:36.074993 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:44:36.075003 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:44:36.075013 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:44:36.075023 | orchestrator |
2025-06-02 00:44:36.075033 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-02 00:44:36.075042 | orchestrator | Monday 02 June 2025 00:44:26 +0000 (0:00:07.072) 0:00:18.985 ***********
2025-06-02 00:44:36.075052 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:44:36.075062 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:44:36.075071 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:44:36.075081 | orchestrator |
2025-06-02 00:44:36.075095 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:44:36.075113 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:44:36.075127 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:44:36.075137 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:44:36.075151 | orchestrator |
2025-06-02 00:44:36.075168 | orchestrator |
2025-06-02 00:44:36.075184 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:44:36.075194 | orchestrator | Monday 02 June 2025 00:44:35 +0000 (0:00:08.908) 0:00:27.893 ***********
2025-06-02 00:44:36.075204 | orchestrator | ===============================================================================
2025-06-02 00:44:36.075214 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.91s
2025-06-02 00:44:36.075223 | orchestrator | redis : Restart redis container ----------------------------------------- 7.07s
2025-06-02 00:44:36.075233 | orchestrator | redis : Copying over default config.json files -------------------------- 3.38s
2025-06-02 00:44:36.075242 | orchestrator | redis : Copying over redis config files --------------------------------- 2.65s
2025-06-02 00:44:36.075252 | orchestrator | redis : Check redis containers ------------------------------------------ 1.72s
2025-06-02 00:44:36.075265 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.50s
2025-06-02 00:44:36.075275 | orchestrator | redis : include_tasks --------------------------------------------------- 0.80s
2025-06-02 00:44:36.075285 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s
2025-06-02 00:44:36.075294 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2025-06-02 00:44:36.075304 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s
2025-06-02 00:44:36.075313 | orchestrator | 2025-06-02 00:44:36 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:44:36.075323 | orchestrator | 2025-06-02 00:44:36 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:44:36.075333 | orchestrator | 2025-06-02 00:44:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:44:39.120868 | orchestrator | 2025-06-02 00:44:39 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:44:39.121533 | orchestrator | 2025-06-02 00:44:39 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:44:39.123085 | orchestrator | 2025-06-02 00:44:39 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:44:39.128319 | orchestrator | 2025-06-02 00:44:39 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:44:39.129315 | orchestrator | 2025-06-02 00:44:39 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:44:39.129393 | orchestrator | 2025-06-02 00:44:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:44:42.162999 | orchestrator | 2025-06-02 00:44:42 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:44:42.164120 | orchestrator | 2025-06-02 00:44:42 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:44:42.164165 | orchestrator | 2025-06-02 00:44:42 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:44:42.164937 | orchestrator | 2025-06-02 00:44:42 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:44:42.165678 | orchestrator | 2025-06-02 00:44:42 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:44:42.165937 | orchestrator | 2025-06-02 00:44:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:44:45.192461 | orchestrator | 2025-06-02 00:44:45 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:44:45.192952 | orchestrator | 2025-06-02 00:44:45 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:44:45.193636 | orchestrator | 2025-06-02 00:44:45 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:44:45.194486 | orchestrator | 2025-06-02 00:44:45 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:44:45.195403 | orchestrator | 2025-06-02 00:44:45 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:44:45.195430 | orchestrator | 2025-06-02 00:44:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:44:48.218768 | orchestrator | 2025-06-02 00:44:48 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:44:48.219474 | orchestrator | 2025-06-02 00:44:48 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:44:48.220261 | orchestrator | 2025-06-02 00:44:48 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:44:48.221162 | orchestrator | 2025-06-02 00:44:48 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:44:48.221961 | orchestrator | 2025-06-02 00:44:48 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:44:48.222262 | orchestrator | 2025-06-02 00:44:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:44:51.244757 | orchestrator | 2025-06-02 00:44:51 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:44:51.245831 | orchestrator | 2025-06-02 00:44:51 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:44:51.246767 | orchestrator | 2025-06-02 00:44:51 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:44:51.248085 | orchestrator | 2025-06-02 00:44:51 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:44:51.249083 | orchestrator | 2025-06-02 00:44:51 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:44:51.249442 | orchestrator | 2025-06-02 00:44:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:44:54.280758 | orchestrator | 2025-06-02 00:44:54 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:44:54.288527 | orchestrator | 2025-06-02 00:44:54 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:44:54.288576 | orchestrator | 2025-06-02 00:44:54 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:44:54.288589 | orchestrator | 2025-06-02 00:44:54 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:44:54.288628 | orchestrator | 2025-06-02 00:44:54 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:44:54.288640 | orchestrator | 2025-06-02 00:44:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:44:57.330318 | orchestrator | 2025-06-02 00:44:57 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:44:57.332064 | orchestrator | 2025-06-02 00:44:57 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:44:57.336623 | orchestrator | 2025-06-02 00:44:57 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:44:57.339049 | orchestrator | 2025-06-02 00:44:57 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:44:57.342660 | orchestrator | 2025-06-02 00:44:57 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:44:57.343465 | orchestrator | 2025-06-02 00:44:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:45:00.384964 | orchestrator | 2025-06-02 00:45:00 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:45:00.385070 | orchestrator | 2025-06-02 00:45:00 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:45:00.385084 | orchestrator | 2025-06-02 00:45:00 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:45:00.385097 | orchestrator | 2025-06-02 00:45:00 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:45:00.385613 | orchestrator | 2025-06-02 00:45:00 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:45:00.385641 | orchestrator | 2025-06-02 00:45:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:45:03.419492 | orchestrator | 2025-06-02 00:45:03 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:45:03.420428 | orchestrator | 2025-06-02 00:45:03 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED
2025-06-02 00:45:03.421000 | orchestrator | 2025-06-02 00:45:03 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:45:03.422550 | orchestrator | 2025-06-02 00:45:03 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:45:03.423140 | orchestrator | 2025-06-02 00:45:03 | INFO  | Task
43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:45:03.423173 | orchestrator | 2025-06-02 00:45:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:45:06.446876 | orchestrator | 2025-06-02 00:45:06 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:45:06.447322 | orchestrator | 2025-06-02 00:45:06 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state STARTED 2025-06-02 00:45:06.447937 | orchestrator | 2025-06-02 00:45:06 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:45:06.448461 | orchestrator | 2025-06-02 00:45:06 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:45:06.449178 | orchestrator | 2025-06-02 00:45:06 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:45:06.449199 | orchestrator | 2025-06-02 00:45:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:45:09.477791 | orchestrator | 2025-06-02 00:45:09 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:45:09.479468 | orchestrator | 2025-06-02 00:45:09 | INFO  | Task cc69ce3e-c4a9-4e3c-903d-0a81aa6256f0 is in state SUCCESS 2025-06-02 00:45:09.481724 | orchestrator | 2025-06-02 00:45:09.481795 | orchestrator | 2025-06-02 00:45:09.481808 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:45:09.481820 | orchestrator | 2025-06-02 00:45:09.481831 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:45:09.481843 | orchestrator | Monday 02 June 2025 00:44:08 +0000 (0:00:00.542) 0:00:00.542 *********** 2025-06-02 00:45:09.481859 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:45:09.481874 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:45:09.481900 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:45:09.481913 | orchestrator | ok: [testbed-node-3] 
2025-06-02 00:45:09.481924 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:45:09.481935 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:45:09.481946 | orchestrator |
2025-06-02 00:45:09.481957 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 00:45:09.481968 | orchestrator | Monday 02 June 2025 00:44:09 +0000 (0:00:01.260) 0:00:01.803 ***********
2025-06-02 00:45:09.481979 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 00:45:09.481990 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 00:45:09.482001 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 00:45:09.482011 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 00:45:09.482069 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 00:45:09.482081 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 00:45:09.482092 | orchestrator |
2025-06-02 00:45:09.482103 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-02 00:45:09.482114 | orchestrator |
2025-06-02 00:45:09.482124 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-02 00:45:09.482135 | orchestrator | Monday 02 June 2025 00:44:11 +0000 (0:00:01.520) 0:00:03.323 ***********
2025-06-02 00:45:09.482148 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:45:09.482160 | orchestrator |
2025-06-02 00:45:09.482171 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 00:45:09.482182 | orchestrator | Monday 02 June 2025 00:44:13 +0000 (0:00:02.176) 0:00:05.500 ***********
2025-06-02 00:45:09.482193 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-02 00:45:09.482205 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-02 00:45:09.482241 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-02 00:45:09.482253 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-02 00:45:09.482264 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-02 00:45:09.482274 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-02 00:45:09.482285 | orchestrator |
2025-06-02 00:45:09.482296 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-02 00:45:09.482307 | orchestrator | Monday 02 June 2025 00:44:14 +0000 (0:00:01.302) 0:00:06.802 ***********
2025-06-02 00:45:09.482319 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-02 00:45:09.482330 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-02 00:45:09.482341 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-02 00:45:09.482352 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-02 00:45:09.482363 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-02 00:45:09.482373 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-02 00:45:09.482384 | orchestrator |
2025-06-02 00:45:09.482395 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-02 00:45:09.482406 | orchestrator | Monday 02 June 2025 00:44:16 +0000 (0:00:01.800) 0:00:08.602 ***********
2025-06-02 00:45:09.482430 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-02 00:45:09.482442 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:45:09.482454 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-02 00:45:09.482465 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:45:09.482476 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-02 00:45:09.482487 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:45:09.482498 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-02 00:45:09.482509 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:45:09.482519 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-02 00:45:09.482530 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:45:09.482541 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-02 00:45:09.482552 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:45:09.482563 | orchestrator |
2025-06-02 00:45:09.482574 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-06-02 00:45:09.482585 | orchestrator | Monday 02 June 2025 00:44:17 +0000 (0:00:01.007) 0:00:09.610 ***********
2025-06-02 00:45:09.482596 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:45:09.482607 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:45:09.482618 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:45:09.482629 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:45:09.482640 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:45:09.482651 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:45:09.482662 | orchestrator |
2025-06-02 00:45:09.482673 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-06-02 00:45:09.482684 | orchestrator | Monday 02 June 2025 00:44:18 +0000 (0:00:00.947) 0:00:10.558 ***********
2025-06-02 00:45:09.482720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530',
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482898 | orchestrator | 2025-06-02 00:45:09.482909 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-06-02 00:45:09.482926 | orchestrator | Monday 02 June 2025 00:44:20 +0000 (0:00:01.648) 0:00:12.206 *********** 2025-06-02 00:45:09.482938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.482991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483068 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483101 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483113 | orchestrator | 2025-06-02 00:45:09.483124 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-02 00:45:09.483135 | orchestrator | Monday 02 June 2025 00:44:23 +0000 (0:00:03.584) 0:00:15.790 *********** 2025-06-02 00:45:09.483146 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:45:09.483158 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:45:09.483169 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:45:09.483186 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:45:09.483197 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:45:09.483208 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:45:09.483219 | orchestrator | 2025-06-02 00:45:09.483261 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-02 00:45:09.483273 | orchestrator | Monday 02 June 2025 00:44:25 +0000 (0:00:01.420) 0:00:17.210 *********** 2025-06-02 00:45:09.483284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483296 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483491 | orchestrator 
| changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 00:45:09.483607 | orchestrator | 2025-06-02 00:45:09.483620 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 00:45:09.483632 | orchestrator | Monday 02 June 2025 00:44:28 +0000 (0:00:03.108) 0:00:20.319 *********** 2025-06-02 00:45:09.483643 | orchestrator | 2025-06-02 00:45:09.483654 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 00:45:09.483665 | orchestrator | Monday 02 June 2025 00:44:28 +0000 (0:00:00.281) 0:00:20.600 *********** 2025-06-02 00:45:09.483675 | orchestrator | 2025-06-02 00:45:09.483686 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 00:45:09.483697 | orchestrator | Monday 02 June 2025 00:44:28 +0000 (0:00:00.179) 0:00:20.780 *********** 2025-06-02 00:45:09.483707 | orchestrator | 2025-06-02 00:45:09.483718 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 00:45:09.483729 | orchestrator | Monday 02 June 2025 00:44:28 +0000 (0:00:00.140) 0:00:20.920 *********** 2025-06-02 00:45:09.483739 | orchestrator | 2025-06-02 00:45:09.483750 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 00:45:09.483761 | orchestrator | Monday 02 June 2025 00:44:28 +0000 (0:00:00.157) 0:00:21.078 *********** 2025-06-02 00:45:09.483771 | orchestrator | 2025-06-02 00:45:09.483782 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 00:45:09.483793 | orchestrator | Monday 02 June 2025 00:44:29 +0000 (0:00:00.247) 0:00:21.325 *********** 2025-06-02 00:45:09.483804 | orchestrator | 2025-06-02 00:45:09.483814 | orchestrator | RUNNING 
HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-02 00:45:09.483825 | orchestrator | Monday 02 June 2025 00:44:29 +0000 (0:00:00.424) 0:00:21.750 *********** 2025-06-02 00:45:09.483836 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:45:09.483847 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:45:09.483857 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:45:09.483868 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:45:09.483879 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:45:09.483890 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:45:09.483900 | orchestrator | 2025-06-02 00:45:09.483911 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-02 00:45:09.483922 | orchestrator | Monday 02 June 2025 00:44:39 +0000 (0:00:09.640) 0:00:31.391 *********** 2025-06-02 00:45:09.483933 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:45:09.483944 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:45:09.483955 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:45:09.483966 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:45:09.483977 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:45:09.483988 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:45:09.483999 | orchestrator | 2025-06-02 00:45:09.484010 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 00:45:09.484028 | orchestrator | Monday 02 June 2025 00:44:40 +0000 (0:00:01.647) 0:00:33.038 *********** 2025-06-02 00:45:09.484039 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:45:09.484050 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:45:09.484061 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:45:09.484071 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:45:09.484082 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:45:09.484093 | orchestrator | changed: [testbed-node-2] 
2025-06-02 00:45:09.484104 | orchestrator | 2025-06-02 00:45:09.484115 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-02 00:45:09.484125 | orchestrator | Monday 02 June 2025 00:44:45 +0000 (0:00:04.966) 0:00:38.004 *********** 2025-06-02 00:45:09.484143 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-02 00:45:09.484155 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-02 00:45:09.484167 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-02 00:45:09.484183 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-02 00:45:09.484195 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-02 00:45:09.484205 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-02 00:45:09.484216 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-02 00:45:09.484278 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-02 00:45:09.484290 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-02 00:45:09.484301 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-02 00:45:09.484312 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-02 00:45:09.484323 | orchestrator | 
changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-02 00:45:09.484334 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 00:45:09.484345 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 00:45:09.484356 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 00:45:09.484367 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 00:45:09.484378 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 00:45:09.484389 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 00:45:09.484400 | orchestrator | 2025-06-02 00:45:09.484411 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-02 00:45:09.484422 | orchestrator | Monday 02 June 2025 00:44:53 +0000 (0:00:07.581) 0:00:45.586 *********** 2025-06-02 00:45:09.484433 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-02 00:45:09.484445 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:45:09.484456 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-02 00:45:09.484467 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:45:09.484478 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-02 00:45:09.484496 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:45:09.484508 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-02 00:45:09.484519 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-02 00:45:09.484530 
| orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-02 00:45:09.484541 | orchestrator | 2025-06-02 00:45:09.484552 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-02 00:45:09.484563 | orchestrator | Monday 02 June 2025 00:44:55 +0000 (0:00:02.206) 0:00:47.792 *********** 2025-06-02 00:45:09.484574 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-02 00:45:09.484585 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:45:09.484596 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-02 00:45:09.484607 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:45:09.484619 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-02 00:45:09.484630 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:45:09.484641 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-02 00:45:09.484652 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-02 00:45:09.484663 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-02 00:45:09.484674 | orchestrator | 2025-06-02 00:45:09.484685 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 00:45:09.484696 | orchestrator | Monday 02 June 2025 00:44:59 +0000 (0:00:03.430) 0:00:51.223 *********** 2025-06-02 00:45:09.484707 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:45:09.484718 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:45:09.484729 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:45:09.484740 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:45:09.484751 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:45:09.484762 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:45:09.484773 | orchestrator | 2025-06-02 00:45:09.484785 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 00:45:09.484795 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 00:45:09.484811 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 00:45:09.484822 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 00:45:09.484836 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 00:45:09.484847 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 00:45:09.484857 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 00:45:09.484866 | orchestrator | 2025-06-02 00:45:09.484876 | orchestrator | 2025-06-02 00:45:09.484886 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:45:09.484896 | orchestrator | Monday 02 June 2025 00:45:06 +0000 (0:00:07.746) 0:00:58.970 *********** 2025-06-02 00:45:09.484906 | orchestrator | =============================================================================== 2025-06-02 00:45:09.484916 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 12.71s 2025-06-02 00:45:09.484925 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.64s 2025-06-02 00:45:09.484935 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.58s 2025-06-02 00:45:09.484945 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.58s 2025-06-02 00:45:09.484960 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.43s 2025-06-02 00:45:09.484970 | orchestrator | 
openvswitch : Check openvswitch containers ------------------------------ 3.11s 2025-06-02 00:45:09.484980 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.21s 2025-06-02 00:45:09.484989 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.18s 2025-06-02 00:45:09.484999 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.80s 2025-06-02 00:45:09.485008 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.65s 2025-06-02 00:45:09.485018 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.65s 2025-06-02 00:45:09.485028 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.52s 2025-06-02 00:45:09.485037 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.43s 2025-06-02 00:45:09.485047 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.42s 2025-06-02 00:45:09.485056 | orchestrator | module-load : Load modules ---------------------------------------------- 1.30s 2025-06-02 00:45:09.485066 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.26s 2025-06-02 00:45:09.485075 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.01s 2025-06-02 00:45:09.485085 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.95s 2025-06-02 00:45:09.485191 | orchestrator | 2025-06-02 00:45:09 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:45:09.485204 | orchestrator | 2025-06-02 00:45:09 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:45:09.487050 | orchestrator | 2025-06-02 00:45:09 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 
00:45:09.488149 | orchestrator | 2025-06-02 00:45:09 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:45:09.488169 | orchestrator | 2025-06-02 00:45:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:10.638425 | orchestrator | 2025-06-02 00:46:10 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:10.642314 | orchestrator | 2025-06-02 00:46:10 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:10.643165 | orchestrator | 2025-06-02 00:46:10 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:46:10.644098 | orchestrator | 2025-06-02 00:46:10 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:10.648146 | orchestrator | 2025-06-02 00:46:10 | INFO  | Task
3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:10.648218 | orchestrator | 2025-06-02 00:46:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:13.689446 | orchestrator | 2025-06-02 00:46:13 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:13.691137 | orchestrator | 2025-06-02 00:46:13 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:13.693123 | orchestrator | 2025-06-02 00:46:13 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:46:13.695601 | orchestrator | 2025-06-02 00:46:13 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:13.697199 | orchestrator | 2025-06-02 00:46:13 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:13.697334 | orchestrator | 2025-06-02 00:46:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:16.728209 | orchestrator | 2025-06-02 00:46:16 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:16.730192 | orchestrator | 2025-06-02 00:46:16 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:16.731555 | orchestrator | 2025-06-02 00:46:16 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:46:16.733275 | orchestrator | 2025-06-02 00:46:16 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:16.734752 | orchestrator | 2025-06-02 00:46:16 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:16.734872 | orchestrator | 2025-06-02 00:46:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:19.779357 | orchestrator | 2025-06-02 00:46:19 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:19.780434 | orchestrator | 2025-06-02 00:46:19 | INFO  | Task 
ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:19.783461 | orchestrator | 2025-06-02 00:46:19 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:46:19.784504 | orchestrator | 2025-06-02 00:46:19 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:19.786153 | orchestrator | 2025-06-02 00:46:19 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:19.786238 | orchestrator | 2025-06-02 00:46:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:22.826072 | orchestrator | 2025-06-02 00:46:22 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:22.826799 | orchestrator | 2025-06-02 00:46:22 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:22.828288 | orchestrator | 2025-06-02 00:46:22 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:46:22.829211 | orchestrator | 2025-06-02 00:46:22 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:22.830676 | orchestrator | 2025-06-02 00:46:22 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:22.830703 | orchestrator | 2025-06-02 00:46:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:25.870255 | orchestrator | 2025-06-02 00:46:25 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:25.870492 | orchestrator | 2025-06-02 00:46:25 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:25.874994 | orchestrator | 2025-06-02 00:46:25 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:46:25.875976 | orchestrator | 2025-06-02 00:46:25 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:25.881163 | orchestrator | 2025-06-02 00:46:25 | INFO  | Task 
3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:25.881189 | orchestrator | 2025-06-02 00:46:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:28.902678 | orchestrator | 2025-06-02 00:46:28 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:28.903077 | orchestrator | 2025-06-02 00:46:28 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:28.903751 | orchestrator | 2025-06-02 00:46:28 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:46:28.904447 | orchestrator | 2025-06-02 00:46:28 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:28.905271 | orchestrator | 2025-06-02 00:46:28 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:28.905353 | orchestrator | 2025-06-02 00:46:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:31.945497 | orchestrator | 2025-06-02 00:46:31 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:31.946130 | orchestrator | 2025-06-02 00:46:31 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:31.946169 | orchestrator | 2025-06-02 00:46:31 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED 2025-06-02 00:46:31.948012 | orchestrator | 2025-06-02 00:46:31 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:31.950589 | orchestrator | 2025-06-02 00:46:31 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:31.951083 | orchestrator | 2025-06-02 00:46:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:34.993395 | orchestrator | 2025-06-02 00:46:34 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:34.993482 | orchestrator | 2025-06-02 00:46:34 | INFO  | Task 
ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:46:34.993496 | orchestrator | 2025-06-02 00:46:34 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:46:34.993508 | orchestrator | 2025-06-02 00:46:34 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:46:34.994513 | orchestrator | 2025-06-02 00:46:34 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED
2025-06-02 00:46:34.994545 | orchestrator | 2025-06-02 00:46:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:46:38.035115 | orchestrator | 2025-06-02 00:46:38 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:46:38.035328 | orchestrator | 2025-06-02 00:46:38 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:46:38.035926 | orchestrator | 2025-06-02 00:46:38 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state STARTED
2025-06-02 00:46:38.036466 | orchestrator | 2025-06-02 00:46:38 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED
2025-06-02 00:46:38.037631 | orchestrator | 2025-06-02 00:46:38 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED
2025-06-02 00:46:38.037656 | orchestrator | 2025-06-02 00:46:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:46:41.067164 | orchestrator | 2025-06-02 00:46:41 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED
2025-06-02 00:46:41.067998 | orchestrator | 2025-06-02 00:46:41 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED
2025-06-02 00:46:41.069146 | orchestrator | 2025-06-02 00:46:41 | INFO  | Task 816a6631-8bea-46e6-bf17-ff2fff8e77a3 is in state SUCCESS
2025-06-02 00:46:41.070380 | orchestrator |
2025-06-02 00:46:41.070414 | orchestrator |
2025-06-02 00:46:41.070428 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-06-02 00:46:41.070442 | orchestrator |
2025-06-02 00:46:41.070455 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-02 00:46:41.070524 | orchestrator | Monday 02 June 2025 00:44:28 +0000 (0:00:00.074) 0:00:00.074 ***********
2025-06-02 00:46:41.070540 | orchestrator | ok: [localhost] => {
2025-06-02 00:46:41.070562 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-06-02 00:46:41.070583 | orchestrator | }
2025-06-02 00:46:41.070605 | orchestrator |
2025-06-02 00:46:41.070647 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-06-02 00:46:41.070660 | orchestrator | Monday 02 June 2025 00:44:28 +0000 (0:00:00.030) 0:00:00.104 ***********
2025-06-02 00:46:41.070671 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-06-02 00:46:41.070684 | orchestrator | ...ignoring
2025-06-02 00:46:41.070695 | orchestrator |
2025-06-02 00:46:41.070706 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-06-02 00:46:41.070717 | orchestrator | Monday 02 June 2025 00:44:32 +0000 (0:00:03.054) 0:00:03.158 ***********
2025-06-02 00:46:41.070728 | orchestrator | skipping: [localhost]
2025-06-02 00:46:41.070739 | orchestrator |
2025-06-02 00:46:41.070750 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-06-02 00:46:41.070761 | orchestrator | Monday 02 June 2025 00:44:32 +0000 (0:00:00.119) 0:00:03.278 ***********
2025-06-02 00:46:41.070772 | orchestrator | ok: [localhost]
2025-06-02 00:46:41.070783 | orchestrator |
2025-06-02 00:46:41.070794 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 00:46:41.070806 |
orchestrator | 2025-06-02 00:46:41.070817 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:46:41.070841 | orchestrator | Monday 02 June 2025 00:44:32 +0000 (0:00:00.203) 0:00:03.482 *********** 2025-06-02 00:46:41.070852 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:41.070863 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:41.070897 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:46:41.070909 | orchestrator | 2025-06-02 00:46:41.070920 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:46:41.070931 | orchestrator | Monday 02 June 2025 00:44:32 +0000 (0:00:00.354) 0:00:03.836 *********** 2025-06-02 00:46:41.070942 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-02 00:46:41.070953 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-02 00:46:41.070964 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-02 00:46:41.070975 | orchestrator | 2025-06-02 00:46:41.070986 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-02 00:46:41.070998 | orchestrator | 2025-06-02 00:46:41.071009 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 00:46:41.071020 | orchestrator | Monday 02 June 2025 00:44:33 +0000 (0:00:00.612) 0:00:04.449 *********** 2025-06-02 00:46:41.071031 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:46:41.071042 | orchestrator | 2025-06-02 00:46:41.071053 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 00:46:41.071064 | orchestrator | Monday 02 June 2025 00:44:33 +0000 (0:00:00.574) 0:00:05.024 *********** 2025-06-02 00:46:41.071075 | orchestrator | ok: [testbed-node-0] 2025-06-02 
00:46:41.071086 | orchestrator |
2025-06-02 00:46:41.071097 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-06-02 00:46:41.071108 | orchestrator | Monday 02 June 2025 00:44:34 +0000 (0:00:00.932) 0:00:05.956 ***********
2025-06-02 00:46:41.071119 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:46:41.071130 | orchestrator |
2025-06-02 00:46:41.071141 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-06-02 00:46:41.071152 | orchestrator | Monday 02 June 2025 00:44:35 +0000 (0:00:00.501) 0:00:06.457 ***********
2025-06-02 00:46:41.071163 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:46:41.071174 | orchestrator |
2025-06-02 00:46:41.071184 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-06-02 00:46:41.071196 | orchestrator | Monday 02 June 2025 00:44:35 +0000 (0:00:00.346) 0:00:06.804 ***********
2025-06-02 00:46:41.071206 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:46:41.071217 | orchestrator |
2025-06-02 00:46:41.071228 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-06-02 00:46:41.071249 | orchestrator | Monday 02 June 2025 00:44:36 +0000 (0:00:00.356) 0:00:07.161 ***********
2025-06-02 00:46:41.071261 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:46:41.071272 | orchestrator |
2025-06-02 00:46:41.071283 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-02 00:46:41.071294 | orchestrator | Monday 02 June 2025 00:44:36 +0000 (0:00:00.427) 0:00:07.589 ***********
2025-06-02 00:46:41.071304 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:46:41.071315 | orchestrator |
2025-06-02 00:46:41.071326 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-02 00:46:41.071337 | orchestrator | Monday 02 June 2025 00:44:37 +0000 (0:00:00.743) 0:00:08.333 ***********
2025-06-02 00:46:41.071348 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:46:41.071359 | orchestrator |
2025-06-02 00:46:41.071370 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-06-02 00:46:41.071381 | orchestrator | Monday 02 June 2025 00:44:37 +0000 (0:00:00.750) 0:00:09.084 ***********
2025-06-02 00:46:41.071392 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:46:41.071403 | orchestrator |
2025-06-02 00:46:41.071414 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-06-02 00:46:41.071426 | orchestrator | Monday 02 June 2025 00:44:38 +0000 (0:00:00.346) 0:00:09.430 ***********
2025-06-02 00:46:41.071437 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:46:41.071448 | orchestrator |
2025-06-02 00:46:41.071469 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-06-02 00:46:41.071480 | orchestrator | Monday 02 June 2025 00:44:38 +0000 (0:00:00.392) 0:00:09.822 ***********
2025-06-02 00:46:41.071496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:46:41.071518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:46:41.071532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:46:41.071551 | orchestrator | 2025-06-02 00:46:41.071562 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-02 00:46:41.071574 | orchestrator | Monday 02 June 2025 00:44:39 +0000 (0:00:01.085) 0:00:10.907 *********** 2025-06-02 00:46:41.071595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:46:41.071613 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:46:41.071626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 00:46:41.071644 | orchestrator |
2025-06-02 00:46:41.071656 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-06-02 00:46:41.071667 | orchestrator | Monday 02 June 2025 00:44:42 +0000 (0:00:02.758) 0:00:13.665 ***********
2025-06-02 00:46:41.071678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 00:46:41.071689 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 00:46:41.071700 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 00:46:41.071711 | orchestrator |
2025-06-02 00:46:41.071722 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-06-02 00:46:41.071733 | orchestrator | Monday 02 June 2025 00:44:44 +0000 (0:00:02.134) 0:00:15.800 ***********
2025-06-02 00:46:41.071743 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 00:46:41.071754 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 00:46:41.071765 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 00:46:41.071776 | orchestrator |
2025-06-02 00:46:41.071787 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-06-02 00:46:41.071797 | orchestrator | Monday 02 June 2025 00:44:46 +0000 (0:00:02.017) 0:00:17.818 ***********
2025-06-02 00:46:41.071808 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 00:46:41.071819 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 00:46:41.071830 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 00:46:41.071841 | orchestrator |
2025-06-02 00:46:41.071858 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-06-02 00:46:41.071870 | orchestrator | Monday 02 June 2025 00:44:48 +0000 (0:00:01.500) 0:00:19.318 ***********
2025-06-02 00:46:41.071912 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-02 00:46:41.071924 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-02 00:46:41.071935 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-02 00:46:41.071946 | orchestrator |
2025-06-02 00:46:41.071957 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-06-02 00:46:41.071968 | orchestrator | Monday 02 June 2025 00:44:49 +0000 (0:00:01.663) 0:00:20.982 ***********
2025-06-02 00:46:41.071978 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-02 00:46:41.071989 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-02 00:46:41.072001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-02 00:46:41.072012 | orchestrator |
2025-06-02 00:46:41.072023 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-06-02 00:46:41.072033 | orchestrator | Monday 02 June 2025 00:44:51 +0000 (0:00:01.357) 0:00:22.340 ***********
2025-06-02 00:46:41.072044 | orchestrator | changed: [testbed-node-0] =>
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 00:46:41.072055 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 00:46:41.072072 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 00:46:41.072084 | orchestrator | 2025-06-02 00:46:41.072095 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 00:46:41.072105 | orchestrator | Monday 02 June 2025 00:44:52 +0000 (0:00:01.315) 0:00:23.655 *********** 2025-06-02 00:46:41.072116 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:41.072128 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:41.072139 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:41.072150 | orchestrator | 2025-06-02 00:46:41.072816 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-02 00:46:41.072843 | orchestrator | Monday 02 June 2025 00:44:52 +0000 (0:00:00.394) 0:00:24.050 *********** 2025-06-02 00:46:41.072856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:46:41.072869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:46:41.072942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:46:41.072966 | orchestrator | 2025-06-02 00:46:41.072978 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-02 00:46:41.072989 | orchestrator | Monday 02 June 2025 00:44:54 +0000 (0:00:01.235) 0:00:25.286 *********** 2025-06-02 00:46:41.073000 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:41.073011 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:41.073022 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:41.073033 | orchestrator | 2025-06-02 00:46:41.073044 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-02 00:46:41.073056 | orchestrator | Monday 02 June 2025 00:44:55 +0000 (0:00:00.896) 0:00:26.182 *********** 2025-06-02 00:46:41.073066 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:41.073077 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:41.073089 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:41.073099 | orchestrator | 2025-06-02 00:46:41.073110 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-02 00:46:41.073122 | orchestrator | Monday 02 June 2025 00:45:02 +0000 (0:00:07.063) 0:00:33.246 *********** 2025-06-02 00:46:41.073132 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:41.073144 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:41.073155 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:41.073165 | orchestrator | 
2025-06-02 00:46:41.073177 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 00:46:41.073188 | orchestrator | 2025-06-02 00:46:41.073198 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 00:46:41.073209 | orchestrator | Monday 02 June 2025 00:45:02 +0000 (0:00:00.378) 0:00:33.625 *********** 2025-06-02 00:46:41.073221 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:41.073232 | orchestrator | 2025-06-02 00:46:41.073243 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 00:46:41.073253 | orchestrator | Monday 02 June 2025 00:45:03 +0000 (0:00:00.591) 0:00:34.216 *********** 2025-06-02 00:46:41.073264 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:41.073275 | orchestrator | 2025-06-02 00:46:41.073287 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 00:46:41.073298 | orchestrator | Monday 02 June 2025 00:45:03 +0000 (0:00:00.166) 0:00:34.383 *********** 2025-06-02 00:46:41.073309 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:41.073320 | orchestrator | 2025-06-02 00:46:41.073331 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 00:46:41.073342 | orchestrator | Monday 02 June 2025 00:45:09 +0000 (0:00:06.529) 0:00:40.912 *********** 2025-06-02 00:46:41.073353 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:41.073363 | orchestrator | 2025-06-02 00:46:41.073374 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 00:46:41.073385 | orchestrator | 2025-06-02 00:46:41.073396 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 00:46:41.073407 | orchestrator | Monday 02 June 2025 00:45:58 +0000 (0:00:48.740) 0:01:29.652 *********** 
2025-06-02 00:46:41.073418 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:41.073429 | orchestrator | 2025-06-02 00:46:41.073440 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 00:46:41.073451 | orchestrator | Monday 02 June 2025 00:45:59 +0000 (0:00:00.640) 0:01:30.293 *********** 2025-06-02 00:46:41.073462 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:41.073473 | orchestrator | 2025-06-02 00:46:41.073484 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 00:46:41.073495 | orchestrator | Monday 02 June 2025 00:45:59 +0000 (0:00:00.365) 0:01:30.658 *********** 2025-06-02 00:46:41.073506 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:41.073517 | orchestrator | 2025-06-02 00:46:41.073528 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 00:46:41.073539 | orchestrator | Monday 02 June 2025 00:46:01 +0000 (0:00:02.030) 0:01:32.689 *********** 2025-06-02 00:46:41.073556 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:41.073567 | orchestrator | 2025-06-02 00:46:41.073578 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 00:46:41.073589 | orchestrator | 2025-06-02 00:46:41.073601 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 00:46:41.073611 | orchestrator | Monday 02 June 2025 00:46:18 +0000 (0:00:16.551) 0:01:49.241 *********** 2025-06-02 00:46:41.073622 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:46:41.073633 | orchestrator | 2025-06-02 00:46:41.073645 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 00:46:41.073655 | orchestrator | Monday 02 June 2025 00:46:18 +0000 (0:00:00.680) 0:01:49.922 *********** 2025-06-02 00:46:41.073667 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 00:46:41.073677 | orchestrator | 2025-06-02 00:46:41.073689 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 00:46:41.073705 | orchestrator | Monday 02 June 2025 00:46:19 +0000 (0:00:00.280) 0:01:50.203 *********** 2025-06-02 00:46:41.073717 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:41.073728 | orchestrator | 2025-06-02 00:46:41.073739 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 00:46:41.073750 | orchestrator | Monday 02 June 2025 00:46:20 +0000 (0:00:01.583) 0:01:51.786 *********** 2025-06-02 00:46:41.073761 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:41.073772 | orchestrator | 2025-06-02 00:46:41.073783 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-02 00:46:41.073794 | orchestrator | 2025-06-02 00:46:41.073805 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-02 00:46:41.073816 | orchestrator | Monday 02 June 2025 00:46:34 +0000 (0:00:13.913) 0:02:05.699 *********** 2025-06-02 00:46:41.073827 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:46:41.073838 | orchestrator | 2025-06-02 00:46:41.073853 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-02 00:46:41.073864 | orchestrator | Monday 02 June 2025 00:46:35 +0000 (0:00:00.718) 0:02:06.418 *********** 2025-06-02 00:46:41.073929 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 00:46:41.073943 | orchestrator | enable_outward_rabbitmq_True 2025-06-02 00:46:41.073954 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 00:46:41.073965 | orchestrator | outward_rabbitmq_restart 2025-06-02 00:46:41.073976 | orchestrator | ok: [testbed-node-2] 
2025-06-02 00:46:41.073987 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:41.073999 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:41.074010 | orchestrator | 2025-06-02 00:46:41.074070 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-02 00:46:41.074082 | orchestrator | skipping: no hosts matched 2025-06-02 00:46:41.074094 | orchestrator | 2025-06-02 00:46:41.074105 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-02 00:46:41.074116 | orchestrator | skipping: no hosts matched 2025-06-02 00:46:41.074127 | orchestrator | 2025-06-02 00:46:41.074138 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-02 00:46:41.074148 | orchestrator | skipping: no hosts matched 2025-06-02 00:46:41.074159 | orchestrator | 2025-06-02 00:46:41.074171 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:46:41.074182 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-02 00:46:41.074193 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 00:46:41.074205 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:46:41.074224 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 00:46:41.074235 | orchestrator | 2025-06-02 00:46:41.074247 | orchestrator | 2025-06-02 00:46:41.074258 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:46:41.074269 | orchestrator | Monday 02 June 2025 00:46:38 +0000 (0:00:02.991) 0:02:09.409 *********** 2025-06-02 00:46:41.074279 | orchestrator | 
=============================================================================== 2025-06-02 00:46:41.074288 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.21s 2025-06-02 00:46:41.074298 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.14s 2025-06-02 00:46:41.074308 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.06s 2025-06-02 00:46:41.074317 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.05s 2025-06-02 00:46:41.074327 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.99s 2025-06-02 00:46:41.074337 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.76s 2025-06-02 00:46:41.074346 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.13s 2025-06-02 00:46:41.074356 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.02s 2025-06-02 00:46:41.074365 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.91s 2025-06-02 00:46:41.074375 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.66s 2025-06-02 00:46:41.074385 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.50s 2025-06-02 00:46:41.074394 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.36s 2025-06-02 00:46:41.074404 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.32s 2025-06-02 00:46:41.074414 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.24s 2025-06-02 00:46:41.074423 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.09s 2025-06-02 00:46:41.074433 | orchestrator | rabbitmq : Get 
container facts ------------------------------------------ 0.93s 2025-06-02 00:46:41.074443 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.90s 2025-06-02 00:46:41.074452 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.81s 2025-06-02 00:46:41.074462 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.75s 2025-06-02 00:46:41.074471 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.74s 2025-06-02 00:46:41.074486 | orchestrator | 2025-06-02 00:46:41 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:41.074497 | orchestrator | 2025-06-02 00:46:41 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:41.074507 | orchestrator | 2025-06-02 00:46:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:44.102252 | orchestrator | 2025-06-02 00:46:44 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:44.102417 | orchestrator | 2025-06-02 00:46:44 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:44.104701 | orchestrator | 2025-06-02 00:46:44 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in state STARTED 2025-06-02 00:46:44.106057 | orchestrator | 2025-06-02 00:46:44 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:44.108023 | orchestrator | 2025-06-02 00:46:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:47.140326 | orchestrator | 2025-06-02 00:46:47 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:47.142220 | orchestrator | 2025-06-02 00:46:47 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:47.142253 | orchestrator | 2025-06-02 00:46:47 | INFO  | Task 43df9623-14d9-41f2-bf01-569fbac80866 is in 
state SUCCESS 2025-06-02 00:46:47.144048 | orchestrator | 2025-06-02 00:46:47.144077 | orchestrator | 2025-06-02 00:46:47.144089 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-02 00:46:47.144100 | orchestrator | 2025-06-02 00:46:47.144111 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-02 00:46:47.144157 | orchestrator | Monday 02 June 2025 00:41:42 +0000 (0:00:00.176) 0:00:00.176 *********** 2025-06-02 00:46:47.144172 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:46:47.144185 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:46:47.144197 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:46:47.144208 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:47.144219 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:47.144230 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:46:47.144241 | orchestrator | 2025-06-02 00:46:47.144252 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-02 00:46:47.144263 | orchestrator | Monday 02 June 2025 00:41:42 +0000 (0:00:00.603) 0:00:00.779 *********** 2025-06-02 00:46:47.144274 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.144286 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.144297 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.144308 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.144319 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.144330 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.144341 | orchestrator | 2025-06-02 00:46:47.144352 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-02 00:46:47.144363 | orchestrator | Monday 02 June 2025 00:41:43 +0000 (0:00:00.607) 0:00:01.387 *********** 2025-06-02 00:46:47.144374 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
00:46:47.144385 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.144395 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.144406 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.144417 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.144428 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.144439 | orchestrator | 2025-06-02 00:46:47.144449 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-02 00:46:47.144460 | orchestrator | Monday 02 June 2025 00:41:44 +0000 (0:00:00.852) 0:00:02.239 *********** 2025-06-02 00:46:47.144471 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:46:47.144482 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:46:47.144492 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:46:47.144503 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:47.144514 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:47.144525 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:47.144535 | orchestrator | 2025-06-02 00:46:47.144546 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-02 00:46:47.144557 | orchestrator | Monday 02 June 2025 00:41:46 +0000 (0:00:01.906) 0:00:04.146 *********** 2025-06-02 00:46:47.144568 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:46:47.144578 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:46:47.144589 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:46:47.144600 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:47.144611 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:47.144621 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:47.144632 | orchestrator | 2025-06-02 00:46:47.144643 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-02 00:46:47.144654 | orchestrator | Monday 02 June 2025 00:41:48 
+0000 (0:00:01.925) 0:00:06.071 *********** 2025-06-02 00:46:47.144665 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:46:47.144675 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:46:47.144686 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:46:47.144710 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:47.144722 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:47.144732 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:47.144743 | orchestrator | 2025-06-02 00:46:47.144754 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-02 00:46:47.144765 | orchestrator | Monday 02 June 2025 00:41:49 +0000 (0:00:01.223) 0:00:07.294 *********** 2025-06-02 00:46:47.144776 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.144787 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.144798 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.144808 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.144819 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.144830 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.144841 | orchestrator | 2025-06-02 00:46:47.144887 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-02 00:46:47.144898 | orchestrator | Monday 02 June 2025 00:41:49 +0000 (0:00:00.671) 0:00:07.966 *********** 2025-06-02 00:46:47.144909 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.144920 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.144931 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.144942 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.144953 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.144963 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.144974 | orchestrator | 2025-06-02 00:46:47.144985 | orchestrator | TASK [k3s_prereq : Set 
bridge-nf-call-iptables (just to be sure)] ************** 2025-06-02 00:46:47.144996 | orchestrator | Monday 02 June 2025 00:41:50 +0000 (0:00:00.684) 0:00:08.650 *********** 2025-06-02 00:46:47.145006 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 00:46:47.145017 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 00:46:47.145028 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.145051 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 00:46:47.145063 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 00:46:47.145074 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.145085 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 00:46:47.145096 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 00:46:47.145107 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.145118 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 00:46:47.145141 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 00:46:47.145153 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.145164 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 00:46:47.145175 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 00:46:47.145186 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.145197 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 00:46:47.145208 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 00:46:47.145219 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 00:46:47.145230 | orchestrator | 2025-06-02 00:46:47.145241 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-02 00:46:47.145252 | orchestrator | Monday 02 June 2025 00:41:51 +0000 (0:00:01.216) 0:00:09.867 *********** 2025-06-02 00:46:47.145263 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.145274 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.145285 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.145296 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.145307 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.145325 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.145336 | orchestrator | 2025-06-02 00:46:47.145347 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-02 00:46:47.145358 | orchestrator | Monday 02 June 2025 00:41:53 +0000 (0:00:01.254) 0:00:11.121 *********** 2025-06-02 00:46:47.145369 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:46:47.145380 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:46:47.145392 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:46:47.145403 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:47.145414 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:47.145424 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:46:47.145435 | orchestrator | 2025-06-02 00:46:47.145446 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-02 00:46:47.145457 | orchestrator | Monday 02 June 2025 00:41:53 +0000 (0:00:00.582) 0:00:11.703 *********** 2025-06-02 00:46:47.145468 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:46:47.145479 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:46:47.145490 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:47.145501 | orchestrator | changed: 
[testbed-node-5] 2025-06-02 00:46:47.145512 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:47.145523 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:47.145534 | orchestrator | 2025-06-02 00:46:47.145545 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-02 00:46:47.145556 | orchestrator | Monday 02 June 2025 00:41:59 +0000 (0:00:05.900) 0:00:17.604 *********** 2025-06-02 00:46:47.145567 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.145578 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.145589 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.145600 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.145611 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.145622 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.145633 | orchestrator | 2025-06-02 00:46:47.145644 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-02 00:46:47.145654 | orchestrator | Monday 02 June 2025 00:42:00 +0000 (0:00:00.635) 0:00:18.239 *********** 2025-06-02 00:46:47.145665 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.145676 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.145687 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.145727 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.145739 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.145750 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.145761 | orchestrator | 2025-06-02 00:46:47.145772 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-02 00:46:47.145784 | orchestrator | Monday 02 June 2025 00:42:01 +0000 (0:00:01.656) 0:00:19.896 *********** 2025-06-02 00:46:47.145795 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
00:46:47.145806 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.145817 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.145828 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.145839 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.145866 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.145877 | orchestrator | 2025-06-02 00:46:47.145888 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-02 00:46:47.145899 | orchestrator | Monday 02 June 2025 00:42:02 +0000 (0:00:00.951) 0:00:20.848 *********** 2025-06-02 00:46:47.145910 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-02 00:46:47.145921 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-02 00:46:47.145932 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.145943 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-02 00:46:47.145954 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-02 00:46:47.145965 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.145983 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-02 00:46:47.145994 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-02 00:46:47.146005 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.146057 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-02 00:46:47.146077 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-02 00:46:47.146088 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.146099 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-02 00:46:47.146110 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-02 00:46:47.146121 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.146132 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  
2025-06-02 00:46:47.146143 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-02 00:46:47.146154 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.146165 | orchestrator | 2025-06-02 00:46:47.146177 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-02 00:46:47.146196 | orchestrator | Monday 02 June 2025 00:42:04 +0000 (0:00:01.176) 0:00:22.024 *********** 2025-06-02 00:46:47.146208 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.146219 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.146230 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.146241 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.146252 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.146263 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.146274 | orchestrator | 2025-06-02 00:46:47.146285 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-02 00:46:47.146296 | orchestrator | 2025-06-02 00:46:47.146307 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-02 00:46:47.146318 | orchestrator | Monday 02 June 2025 00:42:05 +0000 (0:00:01.386) 0:00:23.410 *********** 2025-06-02 00:46:47.146329 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:47.146340 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:47.146350 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:46:47.146361 | orchestrator | 2025-06-02 00:46:47.146372 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-02 00:46:47.146383 | orchestrator | Monday 02 June 2025 00:42:06 +0000 (0:00:01.344) 0:00:24.755 *********** 2025-06-02 00:46:47.146394 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:47.146405 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:46:47.146416 | 
orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:47.146427 | orchestrator | 2025-06-02 00:46:47.146438 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-02 00:46:47.146449 | orchestrator | Monday 02 June 2025 00:42:07 +0000 (0:00:01.200) 0:00:25.955 *********** 2025-06-02 00:46:47.146460 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:47.146471 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:47.146482 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:46:47.146492 | orchestrator | 2025-06-02 00:46:47.146503 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-02 00:46:47.146551 | orchestrator | Monday 02 June 2025 00:42:09 +0000 (0:00:01.113) 0:00:27.069 *********** 2025-06-02 00:46:47.146564 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:47.146575 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:46:47.146586 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:47.146597 | orchestrator | 2025-06-02 00:46:47.146608 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-02 00:46:47.146619 | orchestrator | Monday 02 June 2025 00:42:09 +0000 (0:00:00.864) 0:00:27.934 *********** 2025-06-02 00:46:47.146630 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.146641 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.146652 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.146663 | orchestrator | 2025-06-02 00:46:47.146674 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-02 00:46:47.146692 | orchestrator | Monday 02 June 2025 00:42:10 +0000 (0:00:00.495) 0:00:28.429 *********** 2025-06-02 00:46:47.146703 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:46:47.146714 | orchestrator | 2025-06-02 00:46:47.146725 
| orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-02 00:46:47.146736 | orchestrator | Monday 02 June 2025 00:42:11 +0000 (0:00:00.714) 0:00:29.144 *********** 2025-06-02 00:46:47.146746 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:46:47.146757 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:46:47.146768 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:46:47.146779 | orchestrator | 2025-06-02 00:46:47.146790 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-02 00:46:47.146801 | orchestrator | Monday 02 June 2025 00:42:13 +0000 (0:00:01.942) 0:00:31.086 *********** 2025-06-02 00:46:47.146812 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.146823 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.146834 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:47.146845 | orchestrator | 2025-06-02 00:46:47.146896 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-02 00:46:47.146909 | orchestrator | Monday 02 June 2025 00:42:14 +0000 (0:00:00.916) 0:00:32.002 *********** 2025-06-02 00:46:47.146920 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.146931 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.146942 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:47.146953 | orchestrator | 2025-06-02 00:46:47.146964 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-02 00:46:47.146975 | orchestrator | Monday 02 June 2025 00:42:15 +0000 (0:00:01.046) 0:00:33.049 *********** 2025-06-02 00:46:47.146986 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.146997 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.147008 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:47.147019 | orchestrator | 2025-06-02 00:46:47.147030 | orchestrator | TASK 
[k3s_server : Deploy metallb manifest] ************************************ 2025-06-02 00:46:47.147041 | orchestrator | Monday 02 June 2025 00:42:17 +0000 (0:00:02.082) 0:00:35.132 *********** 2025-06-02 00:46:47.147051 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.147062 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.147074 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.147085 | orchestrator | 2025-06-02 00:46:47.147096 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-02 00:46:47.147106 | orchestrator | Monday 02 June 2025 00:42:17 +0000 (0:00:00.311) 0:00:35.443 *********** 2025-06-02 00:46:47.147117 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.147128 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.147146 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.147157 | orchestrator | 2025-06-02 00:46:47.147168 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-02 00:46:47.147179 | orchestrator | Monday 02 June 2025 00:42:17 +0000 (0:00:00.299) 0:00:35.743 *********** 2025-06-02 00:46:47.147190 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:46:47.147201 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:46:47.147213 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:46:47.147302 | orchestrator | 2025-06-02 00:46:47.147317 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-02 00:46:47.147329 | orchestrator | Monday 02 June 2025 00:42:19 +0000 (0:00:01.627) 0:00:37.371 *********** 2025-06-02 00:46:47.147347 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Monday 02 June 2025 00:43:15 +0000 (0:00:56.083) 0:01:33.454 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Monday 02 June 2025 00:43:16 +0000 (0:00:00.598) 0:01:34.053 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Monday 02 June 2025 00:43:17 +0000 (0:00:01.069) 0:01:35.122 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Monday 02 June 2025 00:43:18 +0000 (0:00:01.197) 0:01:36.319 ***********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_server : Wait for node-token] ****************************************
Monday 02 June 2025 00:43:33 +0000 (0:00:15.394) 0:01:51.714 ***********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Monday 02 June 2025 00:43:34 +0000 (0:00:00.605) 0:01:52.319 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Monday 02 June 2025 00:43:34 +0000 (0:00:00.601) 0:01:52.920 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Monday 02 June 2025 00:43:35 +0000 (0:00:00.562) 0:01:53.483 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Monday 02 June 2025 00:43:36 +0000 (0:00:00.819) 0:01:54.302 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Monday 02 June 2025 00:43:36 +0000 (0:00:00.258) 0:01:54.561 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Monday 02 June 2025 00:43:37 +0000 (0:00:00.620) 0:01:55.181 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

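The node-token tasks above (register mode, change access, read, restore) amount to temporarily widening the permissions on the k3s server token so it can be read, then putting the original mode back. A hedged sketch of that sequence, simulated on a scratch file with a placeholder token; the real path on the masters is /var/lib/rancher/k3s/server/node-token.

```shell
# Simulate the node-token permission dance: record the current mode,
# widen it so the token can be read, then restore the recorded mode.
token=/tmp/node-token                 # stand-in for /var/lib/rancher/k3s/server/node-token
printf 'K10placeholder::server:secret\n' > "$token"   # placeholder token value
chmod 600 "$token"
orig_mode=$(stat -c '%a' "$token")    # "Register node-token file access mode"
chmod 644 "$token"                    # "Change file access node-token"
cat "$token" > /dev/null              # "Read node-token from master"
chmod "$orig_mode" "$token"           # "Restore node-token file access"
stat -c '%a' "$token"                 # prints 600
```

The widen-then-restore pattern keeps the token unreadable to other users except during the brief read.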
TASK [k3s_server : Copy config file to user home directory] ********************
Monday 02 June 2025 00:43:37 +0000 (0:00:00.603) 0:01:55.784 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Monday 02 June 2025 00:43:38 +0000 (0:00:00.992) 0:01:56.776 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Monday 02 June 2025 00:43:39 +0000 (0:00:00.769) 0:01:57.546 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Monday 02 June 2025 00:43:39 +0000 (0:00:00.251) 0:01:57.798 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Monday 02 June 2025 00:43:40 +0000 (0:00:00.262) 0:01:58.060 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Monday 02 June 2025 00:43:40 +0000 (0:00:00.820) 0:01:58.880 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Monday 02 June 2025 00:43:41 +0000 (0:00:00.571) 0:01:59.452 ***********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Monday 02 June 2025 00:43:44 +0000 (0:00:02.971) 0:02:02.423 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Monday 02 June 2025 00:43:44 +0000 (0:00:00.474) 0:02:02.898 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Monday 02 June 2025 00:43:45 +0000 (0:00:00.655) 0:02:03.553 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Monday 02 June 2025 00:43:45 +0000 (0:00:00.296) 0:02:03.849 ***********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Monday 02 June 2025 00:43:46 +0000 (0:00:00.619) 0:02:04.469 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Monday 02 June 2025 00:43:46 +0000 (0:00:00.265) 0:02:04.734 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Monday 02 June 2025 00:43:47 +0000 (0:00:00.267) 0:02:05.001 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Monday 02 June 2025 00:43:47 +0000 (0:00:00.263) 0:02:05.265 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Monday 02 June 2025 00:43:48 +0000 (0:00:01.355) 0:02:06.620 ***********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Monday 02 June 2025 00:43:57 +0000 (0:00:08.573) 0:02:15.193 ***********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Monday 02 June 2025 00:43:57 +0000 (0:00:00.761) 0:02:15.955 ***********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Monday 02 June 2025 00:43:58 +0000 (0:00:00.400) 0:02:16.356 ***********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Monday 02 June 2025 00:43:59 +0000 (0:00:00.892) 0:02:17.248 ***********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Monday 02 June 2025 00:44:00 +0000 (0:00:00.768) 0:02:18.017 ***********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Monday 02 June 2025 00:44:00 +0000 (0:00:00.549) 0:02:18.567 ***********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Monday 02 June 2025 00:44:02 +0000 (0:00:01.565) 0:02:20.132 ***********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Monday 02 June 2025 00:44:02 +0000 (0:00:00.857) 0:02:20.990 ***********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Monday 02 June 2025 00:44:03 +0000 (0:00:00.427) 0:02:21.417 ***********
changed: [testbed-manager]

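The kubeconfig play above fetches the k3s kubeconfig from the first master and rewrites its server address to the API VIP (https://192.168.16.8:6443, from the log) in place of the loopback address that k3s writes by default. A hedged sketch of the rewrite step, using a local scratch file in place of the fetched /etc/rancher/k3s/k3s.yaml:

```shell
# Rewrite the API server address in a kubeconfig, as the
# "Change server address in the kubeconfig" task does.
cfg=/tmp/kubeconfig                     # stand-in for ~/.kube/config
printf 'clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n' > "$cfg"
# k3s writes the loopback address; point clients at the kube-vip VIP instead
sed -i 's|https://127.0.0.1:6443|https://192.168.16.8:6443|' "$cfg"
chmod 600 "$cfg"                        # the kubeconfig embeds credentials
grep 'server:' "$cfg"
```

Pointing at the VIP rather than a specific master keeps kubectl working when any single control-plane node is down.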
PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Monday 02 June 2025 00:44:03 +0000 (0:00:00.444) 0:02:21.862 ***********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Monday 02 June 2025 00:44:03 +0000 (0:00:00.125) 0:02:21.988 ***********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Monday 02 June 2025 00:44:04 +0000 (0:00:00.194) 0:02:22.182 ***********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Monday 02 June 2025 00:44:05 +0000 (0:00:01.105) 0:02:23.288 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Monday 02 June 2025 00:44:06 +0000 (0:00:01.236) 0:02:24.525 ***********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Monday 02 June 2025 00:44:08 +0000 (0:00:01.710) 0:02:26.236 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Monday 02 June 2025 00:44:08 +0000 (0:00:00.345) 0:02:26.581 ***********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Monday 02 June 2025 00:44:13 +0000 (0:00:05.213) 0:02:31.794 ***********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Monday 02 June 2025 00:44:24 +0000 (0:00:10.277) 0:02:42.072 ***********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Monday 02 June 2025 00:44:24 +0000 (0:00:00.514) 0:02:42.587 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Monday 02 June 2025 00:44:25 +0000 (0:00:00.475) 0:02:43.062 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Monday 02 June 2025 00:44:25 +0000 (0:00:00.289) 0:02:43.351 ***********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Monday 02 June 2025 00:44:25 +0000 (0:00:00.561) 0:02:43.913 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Monday 02 June 2025 00:44:26 +0000 (0:00:00.898) 0:02:44.812 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Monday 02 June 2025 00:44:27 +0000 (0:00:00.808) 0:02:45.621 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Monday 02 June 2025 00:44:28 +0000 (0:00:00.516) 0:02:46.137 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Monday 02 June 2025 00:44:28 +0000 (0:00:00.802) 0:02:46.940 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Monday 02 June 2025 00:44:29 +0000 (0:00:00.140) 0:02:47.081 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Monday 02 June 2025 00:44:29 +0000 (0:00:00.145) 0:02:47.226 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Monday 02 June 2025 00:44:29 +0000 (0:00:00.168) 0:02:47.395 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Monday 02 June 2025 00:44:29 +0000 (0:00:00.156) 0:02:47.552 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Monday 02 June 2025 00:44:33 +0000 (0:00:04.057) 0:02:51.610 ***********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Monday 02 June 2025 00:46:21 +0000 (0:01:48.272) 0:04:39.882 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Monday 02 June 2025 00:46:23 +0000 (0:00:01.230) 0:04:41.113 ***********
changed: [testbed-node-0 -> localhost]
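The "Wait for Cilium resources" task above polls rollout readiness with a retries/delay loop, failing twice before the daemonset and deployments became ready. A hedged sketch of that retry pattern; the background `touch` below is a stand-in for readiness arriving asynchronously, where the real task runs something like `kubectl rollout status` per item.

```shell
# Poll a readiness probe until it succeeds, mirroring Ansible's
# retries/delay behaviour for "Wait for Cilium resources".
marker=/tmp/cilium-ready
rm -f "$marker"
(sleep 1; touch "$marker") &      # readiness arrives asynchronously (stand-in)
retries=30                        # the task allows 30 retries
until [ -e "$marker" ]; do
  retries=$((retries - 1))
  [ "$retries" -gt 0 ] || { echo "timed out waiting"; exit 1; }
  sleep 1                         # delay between attempts
done
echo "resources ready"
```

Bounding the loop with a retry budget turns a hung rollout into a hard, diagnosable failure instead of an indefinite wait.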

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Monday 02 June 2025 00:46:24 +0000 (0:00:01.559) 0:04:42.672 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Monday 02 June 2025 00:46:25 +0000 (0:00:01.291) 0:04:43.964 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Monday 02 June 2025 00:46:26 +0000 (0:00:00.194) 0:04:44.159 ***********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Monday 02 June 2025 00:46:28 +0000 (0:00:02.123) 0:04:46.282 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Monday 02 June 2025 00:46:28 +0000 (0:00:00.281) 0:04:46.564 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Monday 02 June 2025 00:46:29 +0000 (0:00:00.754) 0:04:47.318 ***********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Monday 02 June 2025 00:46:29 +0000 (0:00:00.121) 0:04:47.440 ***********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Monday 02 June 2025 00:46:29 +0000 (0:00:00.301) 0:04:47.741 ***********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Monday 02 June 2025 00:46:35 +0000 (0:00:05.395) 0:04:53.137 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Monday 02 June 2025 00:46:35 +0000 (0:00:00.519) 0:04:53.656 ***********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok:
[testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 00:46:47.151779 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 00:46:47.151787 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 00:46:47.151794 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 00:46:47.151802 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 00:46:47.151810 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 00:46:47.151818 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 00:46:47.151826 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 00:46:47.151834 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 00:46:47.151841 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 00:46:47.151862 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 00:46:47.151870 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 00:46:47.151878 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 00:46:47.151886 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 00:46:47.151894 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 00:46:47.151905 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 00:46:47.151914 | orchestrator | ok: [testbed-node-0 -> 
localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 00:46:47.151921 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 00:46:47.151929 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 00:46:47.151937 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 00:46:47.151944 | orchestrator | 2025-06-02 00:46:47.151952 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-02 00:46:47.151965 | orchestrator | Monday 02 June 2025 00:46:45 +0000 (0:00:09.886) 0:05:03.542 *********** 2025-06-02 00:46:47.151974 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.151982 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.151994 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.152002 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.152010 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.152018 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.152026 | orchestrator | 2025-06-02 00:46:47.152034 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-02 00:46:47.152042 | orchestrator | Monday 02 June 2025 00:46:46 +0000 (0:00:00.523) 0:05:04.066 *********** 2025-06-02 00:46:47.152050 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:46:47.152058 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:46:47.152065 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:46:47.152073 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:46:47.152081 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:46:47.152089 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:46:47.152097 | orchestrator | 2025-06-02 00:46:47.152105 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 00:46:47.152113 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:46:47.152122 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-02 00:46:47.152130 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 00:46:47.152138 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 00:46:47.152146 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 00:46:47.152153 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 00:46:47.152161 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 00:46:47.152169 | orchestrator | 2025-06-02 00:46:47.152177 | orchestrator | 2025-06-02 00:46:47.152185 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:46:47.152193 | orchestrator | Monday 02 June 2025 00:46:46 +0000 (0:00:00.568) 0:05:04.635 *********** 2025-06-02 00:46:47.152201 | orchestrator | =============================================================================== 2025-06-02 00:46:47.152209 | orchestrator | k3s_server_post : Wait for Cilium resources --------------------------- 108.27s 2025-06-02 00:46:47.152217 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.08s 2025-06-02 00:46:47.152225 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 15.39s 2025-06-02 00:46:47.152232 | orchestrator | kubectl : Install required packages ------------------------------------ 10.28s 2025-06-02 
00:46:47.152240 | orchestrator | Manage labels ----------------------------------------------------------- 9.89s 2025-06-02 00:46:47.152248 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.57s 2025-06-02 00:46:47.152256 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.90s 2025-06-02 00:46:47.152264 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.40s 2025-06-02 00:46:47.152272 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.21s 2025-06-02 00:46:47.152279 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.06s 2025-06-02 00:46:47.152287 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.97s 2025-06-02 00:46:47.152295 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.12s 2025-06-02 00:46:47.152307 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.08s 2025-06-02 00:46:47.152315 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.94s 2025-06-02 00:46:47.152323 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.93s 2025-06-02 00:46:47.152331 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.91s 2025-06-02 00:46:47.152339 | orchestrator | kubectl : Add repository gpg key ---------------------------------------- 1.71s 2025-06-02 00:46:47.152347 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.66s 2025-06-02 00:46:47.152355 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.63s 2025-06-02 00:46:47.152362 | orchestrator | Make kubeconfig available for use inside the manager 
service ------------ 1.57s 2025-06-02 00:46:47.152370 | orchestrator | 2025-06-02 00:46:47 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:47.152378 | orchestrator | 2025-06-02 00:46:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:50.173130 | orchestrator | 2025-06-02 00:46:50 | INFO  | Task f92cf692-fca3-4946-aea7-6f1911df3aba is in state STARTED 2025-06-02 00:46:50.174574 | orchestrator | 2025-06-02 00:46:50 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:50.175049 | orchestrator | 2025-06-02 00:46:50 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:50.176970 | orchestrator | 2025-06-02 00:46:50 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:50.177492 | orchestrator | 2025-06-02 00:46:50 | INFO  | Task 0b50bea8-06be-417d-a038-4b82fef0e078 is in state STARTED 2025-06-02 00:46:50.177526 | orchestrator | 2025-06-02 00:46:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:53.211263 | orchestrator | 2025-06-02 00:46:53 | INFO  | Task f92cf692-fca3-4946-aea7-6f1911df3aba is in state STARTED 2025-06-02 00:46:53.212508 | orchestrator | 2025-06-02 00:46:53 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:53.213645 | orchestrator | 2025-06-02 00:46:53 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:53.216487 | orchestrator | 2025-06-02 00:46:53 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:53.216964 | orchestrator | 2025-06-02 00:46:53 | INFO  | Task 0b50bea8-06be-417d-a038-4b82fef0e078 is in state SUCCESS 2025-06-02 00:46:53.216990 | orchestrator | 2025-06-02 00:46:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:56.257783 | orchestrator | 2025-06-02 00:46:56 | INFO  | Task f92cf692-fca3-4946-aea7-6f1911df3aba is in state STARTED 
2025-06-02 00:46:56.258415 | orchestrator | 2025-06-02 00:46:56 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:56.259031 | orchestrator | 2025-06-02 00:46:56 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:56.262102 | orchestrator | 2025-06-02 00:46:56 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:56.262217 | orchestrator | 2025-06-02 00:46:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:46:59.312026 | orchestrator | 2025-06-02 00:46:59 | INFO  | Task f92cf692-fca3-4946-aea7-6f1911df3aba is in state SUCCESS 2025-06-02 00:46:59.312550 | orchestrator | 2025-06-02 00:46:59 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:46:59.313301 | orchestrator | 2025-06-02 00:46:59 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:46:59.314328 | orchestrator | 2025-06-02 00:46:59 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:46:59.314356 | orchestrator | 2025-06-02 00:46:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:02.356262 | orchestrator | 2025-06-02 00:47:02 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:02.356366 | orchestrator | 2025-06-02 00:47:02 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:02.357936 | orchestrator | 2025-06-02 00:47:02 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:02.357982 | orchestrator | 2025-06-02 00:47:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:05.394281 | orchestrator | 2025-06-02 00:47:05 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:05.397150 | orchestrator | 2025-06-02 00:47:05 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:05.399286 | 
orchestrator | 2025-06-02 00:47:05 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:05.399916 | orchestrator | 2025-06-02 00:47:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:08.443729 | orchestrator | 2025-06-02 00:47:08 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:08.445314 | orchestrator | 2025-06-02 00:47:08 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:08.445355 | orchestrator | 2025-06-02 00:47:08 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:08.445697 | orchestrator | 2025-06-02 00:47:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:11.496495 | orchestrator | 2025-06-02 00:47:11 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:11.496611 | orchestrator | 2025-06-02 00:47:11 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:11.499160 | orchestrator | 2025-06-02 00:47:11 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:11.499318 | orchestrator | 2025-06-02 00:47:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:14.536619 | orchestrator | 2025-06-02 00:47:14 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:14.545030 | orchestrator | 2025-06-02 00:47:14 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:14.545121 | orchestrator | 2025-06-02 00:47:14 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:14.545137 | orchestrator | 2025-06-02 00:47:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:17.586805 | orchestrator | 2025-06-02 00:47:17 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:17.587377 | orchestrator | 2025-06-02 00:47:17 | INFO  | Task 
ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:17.588510 | orchestrator | 2025-06-02 00:47:17 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:17.588524 | orchestrator | 2025-06-02 00:47:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:20.634452 | orchestrator | 2025-06-02 00:47:20 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:20.635776 | orchestrator | 2025-06-02 00:47:20 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:20.637826 | orchestrator | 2025-06-02 00:47:20 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:20.638229 | orchestrator | 2025-06-02 00:47:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:23.680330 | orchestrator | 2025-06-02 00:47:23 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:23.681880 | orchestrator | 2025-06-02 00:47:23 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:23.684122 | orchestrator | 2025-06-02 00:47:23 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:23.684397 | orchestrator | 2025-06-02 00:47:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:26.729761 | orchestrator | 2025-06-02 00:47:26 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:26.731902 | orchestrator | 2025-06-02 00:47:26 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:26.734675 | orchestrator | 2025-06-02 00:47:26 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:26.734752 | orchestrator | 2025-06-02 00:47:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:29.790753 | orchestrator | 2025-06-02 00:47:29 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state 
STARTED 2025-06-02 00:47:29.790870 | orchestrator | 2025-06-02 00:47:29 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:29.791208 | orchestrator | 2025-06-02 00:47:29 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:29.791234 | orchestrator | 2025-06-02 00:47:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:32.879523 | orchestrator | 2025-06-02 00:47:32 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:32.879805 | orchestrator | 2025-06-02 00:47:32 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:32.881032 | orchestrator | 2025-06-02 00:47:32 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:32.881058 | orchestrator | 2025-06-02 00:47:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:35.937294 | orchestrator | 2025-06-02 00:47:35 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:35.937381 | orchestrator | 2025-06-02 00:47:35 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:35.937707 | orchestrator | 2025-06-02 00:47:35 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state STARTED 2025-06-02 00:47:35.937842 | orchestrator | 2025-06-02 00:47:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:38.991475 | orchestrator | 2025-06-02 00:47:38 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:38.993063 | orchestrator | 2025-06-02 00:47:38 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:38.995745 | orchestrator | 2025-06-02 00:47:38 | INFO  | Task 3a599cc0-608b-4c31-8795-01d7a2c1dcd9 is in state SUCCESS 2025-06-02 00:47:38.999100 | orchestrator | 2025-06-02 00:47:38.999149 | orchestrator | 2025-06-02 00:47:38.999162 | orchestrator | PLAY [Copy kubeconfig to 
the configuration repository] ************************* 2025-06-02 00:47:38.999174 | orchestrator | 2025-06-02 00:47:38.999186 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 00:47:38.999197 | orchestrator | Monday 02 June 2025 00:46:50 +0000 (0:00:00.133) 0:00:00.133 *********** 2025-06-02 00:47:38.999209 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 00:47:38.999220 | orchestrator | 2025-06-02 00:47:38.999311 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 00:47:38.999323 | orchestrator | Monday 02 June 2025 00:46:50 +0000 (0:00:00.703) 0:00:00.837 *********** 2025-06-02 00:47:38.999334 | orchestrator | changed: [testbed-manager] 2025-06-02 00:47:38.999347 | orchestrator | 2025-06-02 00:47:38.999359 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-02 00:47:38.999370 | orchestrator | Monday 02 June 2025 00:46:51 +0000 (0:00:01.064) 0:00:01.901 *********** 2025-06-02 00:47:38.999381 | orchestrator | changed: [testbed-manager] 2025-06-02 00:47:38.999391 | orchestrator | 2025-06-02 00:47:38.999403 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:47:38.999414 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:47:38.999427 | orchestrator | 2025-06-02 00:47:38.999437 | orchestrator | 2025-06-02 00:47:38.999448 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:47:38.999459 | orchestrator | Monday 02 June 2025 00:46:52 +0000 (0:00:00.391) 0:00:02.293 *********** 2025-06-02 00:47:38.999470 | orchestrator | =============================================================================== 2025-06-02 00:47:38.999480 | orchestrator | Write kubeconfig file 
--------------------------------------------------- 1.06s 2025-06-02 00:47:38.999491 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s 2025-06-02 00:47:38.999503 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.39s 2025-06-02 00:47:38.999514 | orchestrator | 2025-06-02 00:47:38.999525 | orchestrator | 2025-06-02 00:47:38.999535 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-02 00:47:38.999546 | orchestrator | 2025-06-02 00:47:38.999557 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-02 00:47:38.999568 | orchestrator | Monday 02 June 2025 00:46:50 +0000 (0:00:00.169) 0:00:00.169 *********** 2025-06-02 00:47:38.999578 | orchestrator | ok: [testbed-manager] 2025-06-02 00:47:38.999590 | orchestrator | 2025-06-02 00:47:38.999601 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-02 00:47:38.999611 | orchestrator | Monday 02 June 2025 00:46:51 +0000 (0:00:00.538) 0:00:00.708 *********** 2025-06-02 00:47:38.999622 | orchestrator | ok: [testbed-manager] 2025-06-02 00:47:38.999633 | orchestrator | 2025-06-02 00:47:38.999644 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 00:47:38.999679 | orchestrator | Monday 02 June 2025 00:46:51 +0000 (0:00:00.445) 0:00:01.154 *********** 2025-06-02 00:47:38.999693 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 00:47:38.999706 | orchestrator | 2025-06-02 00:47:38.999719 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 00:47:38.999733 | orchestrator | Monday 02 June 2025 00:46:52 +0000 (0:00:00.572) 0:00:01.726 *********** 2025-06-02 00:47:38.999745 | orchestrator | changed: [testbed-manager] 2025-06-02 00:47:38.999758 | 
orchestrator | 2025-06-02 00:47:38.999771 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-02 00:47:38.999783 | orchestrator | Monday 02 June 2025 00:46:53 +0000 (0:00:00.927) 0:00:02.654 *********** 2025-06-02 00:47:38.999796 | orchestrator | changed: [testbed-manager] 2025-06-02 00:47:38.999808 | orchestrator | 2025-06-02 00:47:38.999820 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-02 00:47:38.999833 | orchestrator | Monday 02 June 2025 00:46:53 +0000 (0:00:00.464) 0:00:03.118 *********** 2025-06-02 00:47:38.999845 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 00:47:38.999858 | orchestrator | 2025-06-02 00:47:38.999870 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-02 00:47:38.999883 | orchestrator | Monday 02 June 2025 00:46:54 +0000 (0:00:01.280) 0:00:04.398 *********** 2025-06-02 00:47:38.999896 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 00:47:38.999908 | orchestrator | 2025-06-02 00:47:38.999929 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-02 00:47:38.999942 | orchestrator | Monday 02 June 2025 00:46:55 +0000 (0:00:00.675) 0:00:05.073 *********** 2025-06-02 00:47:38.999954 | orchestrator | ok: [testbed-manager] 2025-06-02 00:47:38.999967 | orchestrator | 2025-06-02 00:47:38.999980 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-02 00:47:38.999993 | orchestrator | Monday 02 June 2025 00:46:55 +0000 (0:00:00.329) 0:00:05.403 *********** 2025-06-02 00:47:39.000005 | orchestrator | ok: [testbed-manager] 2025-06-02 00:47:39.000019 | orchestrator | 2025-06-02 00:47:39.000030 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:47:39.000049 | orchestrator | 
testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:47:39.000060 | orchestrator | 2025-06-02 00:47:39.000071 | orchestrator | 2025-06-02 00:47:39.000082 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:47:39.000092 | orchestrator | Monday 02 June 2025 00:46:56 +0000 (0:00:00.277) 0:00:05.680 *********** 2025-06-02 00:47:39.000103 | orchestrator | =============================================================================== 2025-06-02 00:47:39.000114 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.28s 2025-06-02 00:47:39.000125 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.93s 2025-06-02 00:47:39.000136 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.68s 2025-06-02 00:47:39.000160 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.57s 2025-06-02 00:47:39.000172 | orchestrator | Get home directory of operator user ------------------------------------- 0.54s 2025-06-02 00:47:39.000183 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.46s 2025-06-02 00:47:39.000193 | orchestrator | Create .kube directory -------------------------------------------------- 0.45s 2025-06-02 00:47:39.000204 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.33s 2025-06-02 00:47:39.000215 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s 2025-06-02 00:47:39.000226 | orchestrator | 2025-06-02 00:47:39.000236 | orchestrator | 2025-06-02 00:47:39.000247 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:47:39.000258 | orchestrator | 2025-06-02 00:47:39.000268 | orchestrator | TASK [Group hosts based on Kolla action] 
***************************************
2025-06-02 00:47:39.000279 | orchestrator | Monday 02 June 2025 00:45:10 +0000 (0:00:00.175) 0:00:00.175 ***********
2025-06-02 00:47:39.000290 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:47:39.000301 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:47:39.000312 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:47:39.000323 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:47:39.000334 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:47:39.000345 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:47:39.000356 | orchestrator |
2025-06-02 00:47:39.000374 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 00:47:39.000393 | orchestrator | Monday 02 June 2025 00:45:11 +0000 (0:00:00.657) 0:00:00.833 ***********
2025-06-02 00:47:39.000410 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-06-02 00:47:39.000426 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-06-02 00:47:39.000441 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-06-02 00:47:39.000459 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-06-02 00:47:39.000476 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-06-02 00:47:39.000493 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-06-02 00:47:39.000508 | orchestrator |
2025-06-02 00:47:39.000525 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-06-02 00:47:39.000541 | orchestrator |
2025-06-02 00:47:39.000556 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-06-02 00:47:39.000584 | orchestrator | Monday 02 June 2025 00:45:12 +0000 (0:00:00.935) 0:00:01.768 ***********
2025-06-02 00:47:39.000602 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:47:39.000621 | orchestrator |
2025-06-02 00:47:39.000638 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-06-02 00:47:39.000723 | orchestrator | Monday 02 June 2025 00:45:14 +0000 (0:00:01.712) 0:00:03.480 ***********
2025-06-02 00:47:39.000749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000849 | orchestrator |
2025-06-02 00:47:39.000860 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-06-02 00:47:39.000871 | orchestrator | Monday 02 June 2025 00:45:15 +0000 (0:00:01.375) 0:00:04.856 ***********
2025-06-02 00:47:39.000882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000925 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.000958 | orchestrator |
2025-06-02 00:47:39.000973 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-06-02 00:47:39.000985 | orchestrator | Monday 02 June 2025 00:45:16 +0000 (0:00:01.307) 0:00:06.164 ***********
2025-06-02 00:47:39.000996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001037 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001077 | orchestrator |
2025-06-02 00:47:39.001088 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-02 00:47:39.001099 | orchestrator | Monday 02 June 2025 00:45:17 +0000 (0:00:00.963) 0:00:07.127 ***********
2025-06-02 00:47:39.001110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001184 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001197 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001214 | orchestrator |
2025-06-02 00:47:39.001226 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-06-02 00:47:39.001235 | orchestrator | Monday 02 June 2025 00:45:19 +0000 (0:00:01.366) 0:00:08.494 ***********
2025-06-02 00:47:39.001245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:47:39.001308 | orchestrator |
2025-06-02 00:47:39.001318 | orchestrator |
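Each of the tasks above loops over the same kolla-style container definition: a name, an image, a list of bind-mount volumes, and an (empty here) `dimensions` dict of resource limits. As a minimal sketch, not OSISM/kolla-ansible code, the spec logged for `ovn_controller` can be turned into `docker run`-style arguments like this (the `to_run_args` helper is hypothetical):

```python
# Container spec as it appears in the log above (value part only).
ovn_controller = {
    "container_name": "ovn_controller",
    "image": "registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530",
    "volumes": [
        "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
        "/run/openvswitch:/run/openvswitch:shared",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
}

def to_run_args(spec):
    """Hypothetical helper: build a 'docker run'-style argument list."""
    args = ["--name", spec["container_name"]]
    for volume in spec["volumes"]:
        args += ["--volume", volume]  # one --volume flag per bind mount
    args.append(spec["image"])        # image comes last
    return args

args = to_run_args(ovn_controller)
```

This is only meant to make the repeated item dicts in the log easier to read; the real container lifecycle is handled by the `kolla_container` machinery inside the deploy role.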
TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-06-02 00:47:39.001327 | orchestrator | Monday 02 June 2025 00:45:20 +0000 (0:00:01.223) 0:00:09.718 ***********
2025-06-02 00:47:39.001337 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:47:39.001348 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:47:39.001357 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:47:39.001367 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:47:39.001377 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:47:39.001387 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:47:39.001402 | orchestrator |
2025-06-02 00:47:39.001417 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-02 00:47:39.001427 | orchestrator | Monday 02 June 2025 00:45:22 +0000 (0:00:02.273) 0:00:11.991 ***********
2025-06-02 00:47:39.001437 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-02 00:47:39.001446 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-02 00:47:39.001456 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-02 00:47:39.001465 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-02 00:47:39.001474 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-02 00:47:39.001484 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-02 00:47:39.001493 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 00:47:39.001503 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 00:47:39.001512 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 00:47:39.001522 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 00:47:39.001531 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 00:47:39.001540 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 00:47:39.001550 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 00:47:39.001560 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 00:47:39.001570 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 00:47:39.001579 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 00:47:39.001589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 00:47:39.001598 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 00:47:39.001608 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 00:47:39.001618 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 00:47:39.001628 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 00:47:39.001637 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 00:47:39.001647 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 00:47:39.001788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 00:47:39.001806 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 00:47:39.001815 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 00:47:39.001825 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 00:47:39.001835 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 00:47:39.001855 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 00:47:39.001865 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 00:47:39.001885 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 00:47:39.001895 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 00:47:39.001905 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 00:47:39.001915 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 00:47:39.001925 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 00:47:39.001934 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 00:47:39.001944 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 00:47:39.001965 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 00:47:39.001975 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 00:47:39.001984 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 00:47:39.001992 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 00:47:39.001999 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 00:47:39.002008 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-02 00:47:39.002118 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-02 00:47:39.002131 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-02 00:47:39.002139 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-02 00:47:39.002147 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-02 00:47:39.002155 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-02 00:47:39.002163 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 00:47:39.002172 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 00:47:39.002180 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 00:47:39.002188 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 00:47:39.002196 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 00:47:39.002204 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 00:47:39.002211 | orchestrator |
2025-06-02 00:47:39.002220 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 00:47:39.002228 | orchestrator | Monday 02 June 2025 00:45:41 +0000 (0:00:18.432) 0:00:30.424 ***********
2025-06-02 00:47:39.002242 | orchestrator |
2025-06-02 00:47:39.002250 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 00:47:39.002258 | orchestrator | Monday 02 June 2025 00:45:41 +0000 (0:00:00.065) 0:00:30.489 ***********
2025-06-02 00:47:39.002266 | orchestrator |
2025-06-02 00:47:39.002273 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 00:47:39.002281 | orchestrator | Monday 02 June 2025 00:45:41 +0000 (0:00:00.065) 0:00:30.555 ***********
2025-06-02 00:47:39.002289 | orchestrator |
2025-06-02 00:47:39.002297 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 00:47:39.002304 | orchestrator | Monday 02 June 2025 00:45:41 +0000 (0:00:00.065) 0:00:30.620 ***********
2025-06-02 00:47:39.002312 | orchestrator |
2025-06-02 00:47:39.002320 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 00:47:39.002327 | orchestrator | Monday 02 June 2025 00:45:41 +0000 (0:00:00.067) 0:00:30.687 ***********
2025-06-02 00:47:39.002335 | orchestrator |
2025-06-02 00:47:39.002343 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 00:47:39.002351 | orchestrator | Monday 02 June 2025 00:45:41 +0000 (0:00:00.065) 0:00:30.753 ***********
2025-06-02 00:47:39.002358 | orchestrator |
2025-06-02 00:47:39.002366 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-06-02 00:47:39.002374 | orchestrator | Monday 02 June 2025 00:45:41 +0000 (0:00:00.064) 0:00:30.817 ***********
2025-06-02 00:47:39.002382 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:47:39.002391 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:47:39.002399 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:47:39.002411 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:47:39.002419 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:47:39.002427 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:47:39.002435 | orchestrator |
2025-06-02 00:47:39.002443 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-06-02 00:47:39.002451 | orchestrator | Monday 02 June 2025 00:45:43 +0000 (0:00:01.835) 0:00:32.653 ***********
2025-06-02 00:47:39.002459 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:47:39.002467 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:47:39.002475 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:47:39.002483 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:47:39.002491 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:47:39.002499 | orchestrator | changed: [testbed-node-1]
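The "Configure OVN in OVSDB" task above writes `external_ids` keys into the local Open vSwitch database of every node: the per-node Geneve tunnel endpoint (`ovn-encap-ip`), the southbound DB endpoints (`ovn-remote`), probe intervals, and (only on the three gateway nodes) `ovn-bridge-mappings` and `ovn-cms-options`. As a sketch of what those items correspond to at the `ovs-vsctl` level, assuming the values logged for testbed-node-0 (the command rendering here is illustrative, not the role's actual module call):

```python
# external_ids as applied on testbed-node-0, taken from the log above.
external_ids = {
    "ovn-encap-ip": "192.168.16.10",  # node-specific tunnel endpoint
    "ovn-encap-type": "geneve",
    "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
    "ovn-remote-probe-interval": "60000",
    "ovn-openflow-probe-interval": "60",
}

# Render the equivalent ovs-vsctl invocations (one per key).
commands = [
    f"ovs-vsctl set Open_vSwitch . external_ids:{key}={value!r}"
    for key, value in external_ids.items()
]
```

The `ok:`/`changed:` split on `ovn-bridge-mappings` and `ovn-cms-options` reflects that only nodes 0-2 are configured as gateway chassis (`enable-chassis-as-gw`); on nodes 3-5 the same keys are ensured absent instead.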
2025-06-02 00:47:39.002507 | orchestrator |
2025-06-02 00:47:39.002514 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-06-02 00:47:39.002522 | orchestrator |
2025-06-02 00:47:39.002530 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 00:47:39.002543 | orchestrator | Monday 02 June 2025 00:46:17 +0000 (0:00:34.091) 0:01:06.744 ***********
2025-06-02 00:47:39.002551 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:47:39.002559 | orchestrator |
2025-06-02 00:47:39.002567 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 00:47:39.002574 | orchestrator | Monday 02 June 2025 00:46:17 +0000 (0:00:00.527) 0:01:07.271 ***********
2025-06-02 00:47:39.002582 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:47:39.002590 | orchestrator |
2025-06-02 00:47:39.002598 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-06-02 00:47:39.002606 | orchestrator | Monday 02 June 2025 00:46:18 +0000 (0:00:00.639) 0:01:07.911 ***********
2025-06-02 00:47:39.002614 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:47:39.002622 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:47:39.002630 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:47:39.002638 | orchestrator |
2025-06-02 00:47:39.002646 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-06-02 00:47:39.002682 | orchestrator | Monday 02 June 2025 00:46:19 +0000 (0:00:00.897) 0:01:08.808 ***********
2025-06-02 00:47:39.002691 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:47:39.002699 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:47:39.002707 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:47:39.002715 | orchestrator |
2025-06-02 00:47:39.002723 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-06-02 00:47:39.002731 | orchestrator | Monday 02 June 2025 00:46:19 +0000 (0:00:00.332) 0:01:09.141 ***********
2025-06-02 00:47:39.002738 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:47:39.002746 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:47:39.002754 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:47:39.002762 | orchestrator |
2025-06-02 00:47:39.002770 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-06-02 00:47:39.002777 | orchestrator | Monday 02 June 2025 00:46:20 +0000 (0:00:00.363) 0:01:09.504 ***********
2025-06-02 00:47:39.002785 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:47:39.002793 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:47:39.002801 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:47:39.002808 | orchestrator |
2025-06-02 00:47:39.002816 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-06-02 00:47:39.002824 | orchestrator | Monday 02 June 2025 00:46:20 +0000 (0:00:00.484) 0:01:09.989 ***********
2025-06-02 00:47:39.002831 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:47:39.002839 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:47:39.002847 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:47:39.002855 | orchestrator |
2025-06-02 00:47:39.002863 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-06-02 00:47:39.002870 | orchestrator | Monday 02 June 2025 00:46:21 +0000 (0:00:00.325) 0:01:10.315 ***********
2025-06-02 00:47:39.002878 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.002886 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.002894 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.002902 | orchestrator |
2025-06-02 00:47:39.002909 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-06-02 00:47:39.002917 | orchestrator | Monday 02 June 2025 00:46:21 +0000 (0:00:00.311) 0:01:10.626 ***********
2025-06-02 00:47:39.002925 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.002933 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.002941 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.002948 | orchestrator |
2025-06-02 00:47:39.002956 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-06-02 00:47:39.002964 | orchestrator | Monday 02 June 2025 00:46:21 +0000 (0:00:00.300) 0:01:10.927 ***********
2025-06-02 00:47:39.002972 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.002979 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.002987 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.002995 | orchestrator |
2025-06-02 00:47:39.003003 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-06-02 00:47:39.003011 | orchestrator | Monday 02 June 2025 00:46:22 +0000 (0:00:00.464) 0:01:11.391 ***********
2025-06-02 00:47:39.003018 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.003026 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.003034 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.003042 | orchestrator |
2025-06-02 00:47:39.003050 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-06-02 00:47:39.003057 | orchestrator | Monday 02 June 2025 00:46:22 +0000 (0:00:00.287) 0:01:11.679 ***********
2025-06-02 00:47:39.003065 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.003073 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.003080 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.003088 | orchestrator |
2025-06-02 00:47:39.003096 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-06-02 00:47:39.003104 | orchestrator | Monday 02 June 2025 00:46:22 +0000 (0:00:00.287) 0:01:11.967 ***********
2025-06-02 00:47:39.003116 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.003124 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.003132 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.003140 | orchestrator |
2025-06-02 00:47:39.003151 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-06-02 00:47:39.003159 | orchestrator | Monday 02 June 2025 00:46:22 +0000 (0:00:00.276) 0:01:12.244 ***********
2025-06-02 00:47:39.003167 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.003175 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.003182 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.003190 | orchestrator |
2025-06-02 00:47:39.003198 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-06-02 00:47:39.003205 | orchestrator | Monday 02 June 2025 00:46:23 +0000 (0:00:00.458) 0:01:12.702 ***********
2025-06-02 00:47:39.003213 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.003221 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.003229 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.003237 | orchestrator |
2025-06-02 00:47:39.003245 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-06-02 00:47:39.003258 | orchestrator | Monday 02 June 2025 00:46:23 +0000 (0:00:00.291) 0:01:12.994 ***********
2025-06-02 00:47:39.003266 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.003274 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.003282 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.003289 | orchestrator |
2025-06-02 00:47:39.003297 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-06-02 00:47:39.003305 | orchestrator | Monday 02 June 2025 00:46:23 +0000 (0:00:00.268) 0:01:13.262 ***********
2025-06-02 00:47:39.003313 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.003321 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.003329 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.003336 | orchestrator |
2025-06-02 00:47:39.003344 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-06-02 00:47:39.003352 | orchestrator | Monday 02 June 2025 00:46:24 +0000 (0:00:00.282) 0:01:13.544 ***********
2025-06-02 00:47:39.003359 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.003367 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.003375 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.003383 | orchestrator |
2025-06-02 00:47:39.003391 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-06-02 00:47:39.003398 | orchestrator | Monday 02 June 2025 00:46:24 +0000 (0:00:00.707) 0:01:14.252 ***********
2025-06-02 00:47:39.003406 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:47:39.003414 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:47:39.003422 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:47:39.003430 | orchestrator |
2025-06-02 00:47:39.003438 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 00:47:39.003445 | orchestrator | Monday 02 June 2025 00:46:25 +0000 (0:00:00.640) 0:01:14.893 ***********
2025-06-02 00:47:39.003453 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:47:39.003461 | orchestrator |
2025-06-02 00:47:39.003469 | orchestrator | TASK [ovn-db : Set bootstrap
args fact for NB (new cluster)] ******************* 2025-06-02 00:47:39.003477 | orchestrator | Monday 02 June 2025 00:46:27 +0000 (0:00:01.491) 0:01:16.384 *********** 2025-06-02 00:47:39.003484 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.003492 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.003500 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.003508 | orchestrator | 2025-06-02 00:47:39.003516 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-02 00:47:39.003524 | orchestrator | Monday 02 June 2025 00:46:28 +0000 (0:00:00.968) 0:01:17.353 *********** 2025-06-02 00:47:39.003531 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.003539 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.003554 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.003562 | orchestrator | 2025-06-02 00:47:39.003570 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-02 00:47:39.003578 | orchestrator | Monday 02 June 2025 00:46:28 +0000 (0:00:00.383) 0:01:17.736 *********** 2025-06-02 00:47:39.003586 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:47:39.003594 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.003602 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:47:39.003609 | orchestrator | 2025-06-02 00:47:39.003617 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-02 00:47:39.003625 | orchestrator | Monday 02 June 2025 00:46:28 +0000 (0:00:00.313) 0:01:18.050 *********** 2025-06-02 00:47:39.003633 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:47:39.003641 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.003649 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:47:39.003687 | orchestrator | 2025-06-02 00:47:39.003696 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the 
new node in NB DB] *** 2025-06-02 00:47:39.003704 | orchestrator | Monday 02 June 2025 00:46:29 +0000 (0:00:00.287) 0:01:18.338 *********** 2025-06-02 00:47:39.003711 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:47:39.003719 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.003727 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:47:39.003735 | orchestrator | 2025-06-02 00:47:39.003743 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-02 00:47:39.003751 | orchestrator | Monday 02 June 2025 00:46:29 +0000 (0:00:00.725) 0:01:19.063 *********** 2025-06-02 00:47:39.003759 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:47:39.003767 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.003775 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:47:39.003783 | orchestrator | 2025-06-02 00:47:39.003791 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-02 00:47:39.003798 | orchestrator | Monday 02 June 2025 00:46:30 +0000 (0:00:00.327) 0:01:19.391 *********** 2025-06-02 00:47:39.003806 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:47:39.003814 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.003822 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:47:39.003830 | orchestrator | 2025-06-02 00:47:39.003837 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-02 00:47:39.003845 | orchestrator | Monday 02 June 2025 00:46:30 +0000 (0:00:00.440) 0:01:19.831 *********** 2025-06-02 00:47:39.003853 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:47:39.003861 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.003873 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:47:39.003881 | orchestrator | 2025-06-02 00:47:39.003889 | orchestrator | TASK [ovn-db : Ensuring config directories exist] 
****************************** 2025-06-02 00:47:39.003897 | orchestrator | Monday 02 June 2025 00:46:30 +0000 (0:00:00.391) 0:01:20.223 *********** 2025-06-02 00:47:39.003907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.003922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.003931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.003946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.003956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.003965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.003973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.003981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.003989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.003997 | orchestrator | 2025-06-02 00:47:39.004005 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 00:47:39.004013 | orchestrator | Monday 02 June 2025 00:46:32 +0000 (0:00:01.379) 0:01:21.602 *********** 2025-06-02 00:47:39.004025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-02 00:47:39.004100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004107 | orchestrator | 2025-06-02 00:47:39.004115 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 00:47:39.004123 | orchestrator | Monday 02 June 2025 00:46:36 +0000 (0:00:03.868) 0:01:25.471 *********** 2025-06-02 00:47:39.004135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-06-02 00:47:39.004171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004219 | orchestrator | 2025-06-02 00:47:39.004227 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 00:47:39.004235 | orchestrator | Monday 02 June 2025 00:46:38 +0000 (0:00:02.491) 0:01:27.962 *********** 2025-06-02 00:47:39.004243 | orchestrator | 2025-06-02 00:47:39.004251 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 00:47:39.004259 | orchestrator | Monday 02 June 2025 00:46:38 +0000 (0:00:00.201) 0:01:28.163 *********** 2025-06-02 00:47:39.004266 | orchestrator | 2025-06-02 00:47:39.004274 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 00:47:39.004282 | orchestrator | Monday 02 June 2025 00:46:39 +0000 (0:00:00.181) 0:01:28.345 *********** 2025-06-02 00:47:39.004290 | orchestrator | 2025-06-02 00:47:39.004298 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 00:47:39.004305 | orchestrator | Monday 02 June 2025 00:46:39 +0000 (0:00:00.156) 0:01:28.501 *********** 2025-06-02 00:47:39.004319 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:47:39.004327 | 
orchestrator | changed: [testbed-node-1] 2025-06-02 00:47:39.004335 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:47:39.004343 | orchestrator | 2025-06-02 00:47:39.004351 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 00:47:39.004358 | orchestrator | Monday 02 June 2025 00:46:42 +0000 (0:00:03.214) 0:01:31.715 *********** 2025-06-02 00:47:39.004366 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:47:39.004374 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:47:39.004382 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:47:39.004390 | orchestrator | 2025-06-02 00:47:39.004398 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 00:47:39.004405 | orchestrator | Monday 02 June 2025 00:46:50 +0000 (0:00:08.122) 0:01:39.838 *********** 2025-06-02 00:47:39.004413 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:47:39.004421 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:47:39.004429 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:47:39.004436 | orchestrator | 2025-06-02 00:47:39.004448 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 00:47:39.004457 | orchestrator | Monday 02 June 2025 00:46:58 +0000 (0:00:08.191) 0:01:48.029 *********** 2025-06-02 00:47:39.004464 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:47:39.004472 | orchestrator | 2025-06-02 00:47:39.004480 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 00:47:39.004488 | orchestrator | Monday 02 June 2025 00:46:58 +0000 (0:00:00.106) 0:01:48.136 *********** 2025-06-02 00:47:39.004495 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.004503 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.004511 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.004519 | orchestrator | 2025-06-02 
00:47:39.004527 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 00:47:39.004534 | orchestrator | Monday 02 June 2025 00:46:59 +0000 (0:00:00.867) 0:01:49.004 *********** 2025-06-02 00:47:39.004542 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.004550 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:47:39.004558 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:47:39.004566 | orchestrator | 2025-06-02 00:47:39.004573 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 00:47:39.004581 | orchestrator | Monday 02 June 2025 00:47:00 +0000 (0:00:00.945) 0:01:49.949 *********** 2025-06-02 00:47:39.004589 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.004597 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.004605 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.004612 | orchestrator | 2025-06-02 00:47:39.004620 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 00:47:39.004628 | orchestrator | Monday 02 June 2025 00:47:01 +0000 (0:00:00.772) 0:01:50.721 *********** 2025-06-02 00:47:39.004636 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.004643 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:47:39.004670 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:47:39.004679 | orchestrator | 2025-06-02 00:47:39.004687 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 00:47:39.004695 | orchestrator | Monday 02 June 2025 00:47:02 +0000 (0:00:00.607) 0:01:51.329 *********** 2025-06-02 00:47:39.004703 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.004711 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.004719 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.004727 | orchestrator | 2025-06-02 00:47:39.004735 | orchestrator | TASK 
[ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 00:47:39.004743 | orchestrator | Monday 02 June 2025 00:47:02 +0000 (0:00:00.769) 0:01:52.098 *********** 2025-06-02 00:47:39.004750 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.004758 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.004766 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.004774 | orchestrator | 2025-06-02 00:47:39.004782 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-02 00:47:39.004795 | orchestrator | Monday 02 June 2025 00:47:04 +0000 (0:00:01.297) 0:01:53.395 *********** 2025-06-02 00:47:39.004802 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.004810 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.004818 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.004826 | orchestrator | 2025-06-02 00:47:39.004834 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 00:47:39.004842 | orchestrator | Monday 02 June 2025 00:47:04 +0000 (0:00:00.320) 0:01:53.715 *********** 2025-06-02 00:47:39.004850 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004858 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004866 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004916 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004925 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004933 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004941 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004949 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004962 | orchestrator | 2025-06-02 00:47:39.004970 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 00:47:39.004978 | orchestrator | Monday 02 June 2025 00:47:05 +0000 (0:00:01.380) 0:01:55.096 *********** 2025-06-02 00:47:39.004986 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.004995 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005003 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005011 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-02 00:47:39.005043 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005072 | orchestrator | 2025-06-02 00:47:39.005081 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 00:47:39.005088 | orchestrator | Monday 02 June 2025 00:47:09 +0000 (0:00:03.648) 0:01:58.745 *********** 2025-06-02 00:47:39.005096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005105 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005113 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005132 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005174 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:47:39.005182 | orchestrator | 2025-06-02 00:47:39.005190 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 00:47:39.005198 | orchestrator | Monday 02 June 2025 00:47:12 +0000 (0:00:02.664) 0:02:01.409 *********** 2025-06-02 00:47:39.005206 | orchestrator | 2025-06-02 00:47:39.005214 
| orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 00:47:39.005222 | orchestrator | Monday 02 June 2025 00:47:12 +0000 (0:00:00.068) 0:02:01.477 *********** 2025-06-02 00:47:39.005229 | orchestrator | 2025-06-02 00:47:39.005237 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 00:47:39.005245 | orchestrator | Monday 02 June 2025 00:47:12 +0000 (0:00:00.067) 0:02:01.545 *********** 2025-06-02 00:47:39.005253 | orchestrator | 2025-06-02 00:47:39.005261 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 00:47:39.005268 | orchestrator | Monday 02 June 2025 00:47:12 +0000 (0:00:00.067) 0:02:01.612 *********** 2025-06-02 00:47:39.005276 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:47:39.005285 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:47:39.005292 | orchestrator | 2025-06-02 00:47:39.005300 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 00:47:39.005308 | orchestrator | Monday 02 June 2025 00:47:18 +0000 (0:00:06.125) 0:02:07.738 *********** 2025-06-02 00:47:39.005316 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:47:39.005324 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:47:39.005332 | orchestrator | 2025-06-02 00:47:39.005340 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 00:47:39.005347 | orchestrator | Monday 02 June 2025 00:47:24 +0000 (0:00:06.230) 0:02:13.969 *********** 2025-06-02 00:47:39.005355 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:47:39.005363 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:47:39.005371 | orchestrator | 2025-06-02 00:47:39.005379 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 00:47:39.005387 | orchestrator | Monday 
02 June 2025 00:47:30 +0000 (0:00:06.091) 0:02:20.061 *********** 2025-06-02 00:47:39.005394 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:47:39.005402 | orchestrator | 2025-06-02 00:47:39.005410 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 00:47:39.005418 | orchestrator | Monday 02 June 2025 00:47:30 +0000 (0:00:00.157) 0:02:20.218 *********** 2025-06-02 00:47:39.005426 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.005434 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.005442 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.005449 | orchestrator | 2025-06-02 00:47:39.005457 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 00:47:39.005465 | orchestrator | Monday 02 June 2025 00:47:31 +0000 (0:00:01.028) 0:02:21.246 *********** 2025-06-02 00:47:39.005473 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.005481 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:47:39.005489 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:47:39.005497 | orchestrator | 2025-06-02 00:47:39.005505 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 00:47:39.005512 | orchestrator | Monday 02 June 2025 00:47:32 +0000 (0:00:00.633) 0:02:21.880 *********** 2025-06-02 00:47:39.005520 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.005528 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.005536 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.005549 | orchestrator | 2025-06-02 00:47:39.005557 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 00:47:39.005569 | orchestrator | Monday 02 June 2025 00:47:33 +0000 (0:00:00.764) 0:02:22.645 *********** 2025-06-02 00:47:39.005577 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:47:39.005585 | orchestrator | 
skipping: [testbed-node-2] 2025-06-02 00:47:39.005593 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:47:39.005601 | orchestrator | 2025-06-02 00:47:39.005609 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 00:47:39.005617 | orchestrator | Monday 02 June 2025 00:47:34 +0000 (0:00:00.791) 0:02:23.436 *********** 2025-06-02 00:47:39.005624 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.005632 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.005640 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.005648 | orchestrator | 2025-06-02 00:47:39.005705 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 00:47:39.005714 | orchestrator | Monday 02 June 2025 00:47:35 +0000 (0:00:00.944) 0:02:24.381 *********** 2025-06-02 00:47:39.005721 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:47:39.005729 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:47:39.005742 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:47:39.005750 | orchestrator | 2025-06-02 00:47:39.005758 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:47:39.005766 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-02 00:47:39.005774 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-02 00:47:39.005782 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-02 00:47:39.005790 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:47:39.005798 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:47:39.005806 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-06-02 00:47:39.005814 | orchestrator | 2025-06-02 00:47:39.005822 | orchestrator | 2025-06-02 00:47:39.005830 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:47:39.005838 | orchestrator | Monday 02 June 2025 00:47:36 +0000 (0:00:01.217) 0:02:25.599 *********** 2025-06-02 00:47:39.005846 | orchestrator | =============================================================================== 2025-06-02 00:47:39.005853 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.09s 2025-06-02 00:47:39.005861 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.43s 2025-06-02 00:47:39.005869 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.35s 2025-06-02 00:47:39.005877 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.28s 2025-06-02 00:47:39.005885 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.34s 2025-06-02 00:47:39.005892 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.87s 2025-06-02 00:47:39.005900 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.65s 2025-06-02 00:47:39.005908 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.66s 2025-06-02 00:47:39.005916 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.49s 2025-06-02 00:47:39.005924 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.27s 2025-06-02 00:47:39.005931 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.84s 2025-06-02 00:47:39.005945 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.71s 2025-06-02 00:47:39.005952 
| orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.49s 2025-06-02 00:47:39.005960 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.38s 2025-06-02 00:47:39.005968 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.38s 2025-06-02 00:47:39.005976 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.38s 2025-06-02 00:47:39.005983 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.37s 2025-06-02 00:47:39.005991 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.31s 2025-06-02 00:47:39.005999 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.30s 2025-06-02 00:47:39.006007 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.22s 2025-06-02 00:47:39.006038 | orchestrator | 2025-06-02 00:47:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:42.039479 | orchestrator | 2025-06-02 00:47:42 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:42.040822 | orchestrator | 2025-06-02 00:47:42 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:42.040876 | orchestrator | 2025-06-02 00:47:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:45.075997 | orchestrator | 2025-06-02 00:47:45 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 00:47:45.076125 | orchestrator | 2025-06-02 00:47:45 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:45.076166 | orchestrator | 2025-06-02 00:47:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:47:48.123113 | orchestrator | 2025-06-02 00:47:48 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state STARTED 2025-06-02 
00:47:48.124394 | orchestrator | 2025-06-02 00:47:48 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:47:48.124659 | orchestrator | 2025-06-02 00:47:48 | INFO  | Wait 1 second(s) until the next check [identical STARTED checks for tasks d3c23406-1703-4a18-a24e-bb67e3b5a131 and ac6d63aa-190f-4827-ad3f-5d4d1ba84625 repeated every ~3 seconds until 00:50:05] 2025-06-02 00:50:02.169148 | orchestrator | 2025-06-02 00:50:02 | INFO  | Task c2c30f6f-e20c-46b0-b667-aff12a5ca853 is in state STARTED 2025-06-02 00:50:08.257114 | orchestrator | 2025-06-02 00:50:08 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED 2025-06-02 00:50:08.266276 |
orchestrator | 2025-06-02 00:50:08 | INFO  | Task d3c23406-1703-4a18-a24e-bb67e3b5a131 is in state SUCCESS 2025-06-02 00:50:08.268402 | orchestrator | 2025-06-02 00:50:08.268435 | orchestrator | 2025-06-02 00:50:08.268448 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:50:08.268459 | orchestrator | 2025-06-02 00:50:08.268470 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:50:08.268495 | orchestrator | Monday 02 June 2025 00:44:08 +0000 (0:00:00.251) 0:00:00.251 *********** 2025-06-02 00:50:08.268663 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.268681 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.268694 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.268705 | orchestrator | 2025-06-02 00:50:08.268716 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:50:08.268727 | orchestrator | Monday 02 June 2025 00:44:08 +0000 (0:00:00.428) 0:00:00.679 *********** 2025-06-02 00:50:08.268805 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-06-02 00:50:08.268819 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-06-02 00:50:08.268830 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-06-02 00:50:08.268841 | orchestrator | 2025-06-02 00:50:08.268852 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-06-02 00:50:08.268863 | orchestrator | 2025-06-02 00:50:08.268874 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-02 00:50:08.268885 | orchestrator | Monday 02 June 2025 00:44:09 +0000 (0:00:00.884) 0:00:01.563 *********** 2025-06-02 00:50:08.268921 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2025-06-02 00:50:08.268935 | orchestrator | 2025-06-02 00:50:08.268946 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-06-02 00:50:08.268957 | orchestrator | Monday 02 June 2025 00:44:10 +0000 (0:00:01.376) 0:00:02.940 *********** 2025-06-02 00:50:08.268969 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.268980 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.268991 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.269002 | orchestrator | 2025-06-02 00:50:08.269015 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-02 00:50:08.269029 | orchestrator | Monday 02 June 2025 00:44:11 +0000 (0:00:00.771) 0:00:03.711 *********** 2025-06-02 00:50:08.269042 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.269055 | orchestrator | 2025-06-02 00:50:08.269120 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-06-02 00:50:08.269134 | orchestrator | Monday 02 June 2025 00:44:12 +0000 (0:00:01.330) 0:00:05.042 *********** 2025-06-02 00:50:08.269148 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.269190 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.269292 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.269306 | orchestrator | 2025-06-02 00:50:08.269319 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-06-02 00:50:08.269333 | orchestrator | Monday 02 June 2025 00:44:13 +0000 (0:00:00.818) 0:00:05.860 *********** 2025-06-02 00:50:08.269346 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-02 00:50:08.269359 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-02 00:50:08.269370 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-02 00:50:08.269381 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-02 00:50:08.269391 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-02 00:50:08.269403 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-02 00:50:08.269414 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-02 00:50:08.269424 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-02 00:50:08.269435 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-02 00:50:08.269446 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-02 00:50:08.269456 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-02 00:50:08.269467 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-02 00:50:08.269478 | orchestrator | 2025-06-02 00:50:08.269489 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-02 00:50:08.269500 | orchestrator | Monday 02 June 2025 00:44:17 +0000 (0:00:03.387) 0:00:09.248 *********** 2025-06-02 00:50:08.269533 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-02 00:50:08.269546 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-02 00:50:08.269557 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-02 00:50:08.269568 | orchestrator | 2025-06-02 00:50:08.269579 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 00:50:08.269590 | orchestrator | Monday 02 June 2025 
00:44:17 +0000 (0:00:00.738) 0:00:09.986 *********** 2025-06-02 00:50:08.269601 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-02 00:50:08.269612 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-02 00:50:08.269815 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-02 00:50:08.269826 | orchestrator | 2025-06-02 00:50:08.269838 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 00:50:08.269849 | orchestrator | Monday 02 June 2025 00:44:19 +0000 (0:00:01.548) 0:00:11.534 *********** 2025-06-02 00:50:08.269860 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-06-02 00:50:08.269871 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.269895 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-06-02 00:50:08.269907 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.269918 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-06-02 00:50:08.269929 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.269940 | orchestrator | 2025-06-02 00:50:08.269959 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-02 00:50:08.269970 | orchestrator | Monday 02 June 2025 00:44:20 +0000 (0:00:00.997) 0:00:12.532 *********** 2025-06-02 00:50:08.269994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 00:50:08.270153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 00:50:08.270164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 00:50:08.270176 | orchestrator | 2025-06-02 00:50:08.270188 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-02 00:50:08.270225 | orchestrator | Monday 02 June 2025 00:44:22 +0000 (0:00:02.502) 0:00:15.035 *********** 2025-06-02 00:50:08.270238 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.270249 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.270261 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.270272 | orchestrator | 2025-06-02 00:50:08.270326 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-02 00:50:08.270338 | orchestrator | Monday 02 June 2025 00:44:24 +0000 (0:00:01.250) 0:00:16.285 *********** 2025-06-02 00:50:08.270349 | 
orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-02 00:50:08.270360 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-02 00:50:08.270371 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-02 00:50:08.270382 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-02 00:50:08.270393 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-02 00:50:08.270404 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-06-02 00:50:08.270415 | orchestrator | 2025-06-02 00:50:08.270426 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-06-02 00:50:08.270437 | orchestrator | Monday 02 June 2025 00:44:26 +0000 (0:00:02.676) 0:00:18.962 *********** 2025-06-02 00:50:08.270448 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.270459 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.270470 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.270481 | orchestrator | 2025-06-02 00:50:08.270492 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-06-02 00:50:08.270503 | orchestrator | Monday 02 June 2025 00:44:28 +0000 (0:00:02.075) 0:00:21.037 *********** 2025-06-02 00:50:08.270514 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.270525 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.270536 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.270547 | orchestrator | 2025-06-02 00:50:08.270558 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-02 00:50:08.270569 | orchestrator | Monday 02 June 2025 00:44:30 +0000 (0:00:01.233) 0:00:22.271 *********** 2025-06-02 00:50:08.270580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.270613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.270626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.270638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.270650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.270662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 00:50:08.270673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.270691 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.270708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 00:50:08.270720 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.270736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.270748 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.270759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.270771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 00:50:08.270782 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
00:50:08.270793 | orchestrator | 2025-06-02 00:50:08.270804 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-02 00:50:08.270815 | orchestrator | Monday 02 June 2025 00:44:30 +0000 (0:00:00.807) 0:00:23.078 *********** 2025-06-02 00:50:08.270832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.270901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 00:50:08.270924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 00:50:08.270936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.270959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 00:50:08.271150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 00:50:08.271162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.271174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090', '__omit_place_holder__94c4f483e571aa3556ac897601c293f9053e6090'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 00:50:08.271185 | orchestrator |
2025-06-02 00:50:08.271249 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-06-02 00:50:08.271264 | orchestrator | Monday 02 June 2025 00:44:34 +0000 (0:00:03.447) 0:00:26.526 ***********
2025-06-02 00:50:08.271290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.271302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.271328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.271340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.271352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.271363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.271381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.271393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.271404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.271416 | orchestrator |
2025-06-02 00:50:08.271427 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-06-02 00:50:08.271438 | orchestrator | Monday 02 June 2025 00:44:38 +0000 (0:00:03.828) 0:00:30.354 ***********
2025-06-02 00:50:08.271449 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-02 00:50:08.271466 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-02 00:50:08.271478 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-02 00:50:08.271489 | orchestrator |
2025-06-02 00:50:08.271500 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-06-02 00:50:08.271511 | orchestrator | Monday 02 June 2025 00:44:40 +0000 (0:00:02.048) 0:00:32.403 ***********
2025-06-02 00:50:08.271522 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-02 00:50:08.271533 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-02 00:50:08.271544 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-02 00:50:08.271555 | orchestrator |
2025-06-02 00:50:08.271566 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-06-02 00:50:08.271633 | orchestrator | Monday 02 June 2025 00:44:45 +0000 (0:00:04.848) 0:00:37.251 ***********
2025-06-02 00:50:08.271645 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.271656 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.271667 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.271678 | orchestrator |
2025-06-02 00:50:08.271689 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-06-02 00:50:08.271700 | orchestrator | Monday 02 June 2025 00:44:45 +0000 (0:00:00.762) 0:00:38.013 ***********
2025-06-02 00:50:08.271712 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-02 00:50:08.271723 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-02 00:50:08.271741 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-02 00:50:08.271752 | orchestrator |
2025-06-02 00:50:08.271763 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-06-02 00:50:08.271774 | orchestrator | Monday 02 June 2025 00:44:48 +0000 (0:00:02.873) 0:00:40.887 ***********
2025-06-02 00:50:08.271785 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-02 00:50:08.271796 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-02 00:50:08.271807 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-02 00:50:08.271818 | orchestrator |
2025-06-02 00:50:08.271829 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-06-02 00:50:08.271840 | orchestrator | Monday 02 June 2025 00:44:50 +0000 (0:00:01.838) 0:00:42.725 ***********
2025-06-02 00:50:08.271851 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-06-02 00:50:08.271862 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-06-02 00:50:08.271873 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-06-02 00:50:08.271884 | orchestrator |
2025-06-02 00:50:08.271895 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-06-02 00:50:08.271906 | orchestrator | Monday 02 June 2025 00:44:52 +0000 (0:00:01.513) 0:00:44.238 ***********
2025-06-02 00:50:08.271944 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-06-02 00:50:08.271956 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-06-02 00:50:08.271967 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-06-02 00:50:08.271978 | orchestrator |
2025-06-02 00:50:08.271989 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-02 00:50:08.272000 | orchestrator | Monday 02 June 2025 00:44:53 +0000 (0:00:00.712) 0:00:45.627 ***********
2025-06-02 00:50:08.272011 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:50:08.272022 | orchestrator |
2025-06-02 00:50:08.272033 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-06-02 00:50:08.272043 | orchestrator | Monday 02 June 2025 00:44:54 +0000 (0:00:00.712) 0:00:46.340 ***********
2025-06-02 00:50:08.272055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272233 | orchestrator |
2025-06-02 00:50:08.272245 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-06-02 00:50:08.272256 | orchestrator | Monday 02 June 2025 00:44:57 +0000 (0:00:03.299) 0:00:49.639 ***********
2025-06-02 00:50:08.272267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272302 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.272313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272366 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.272377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272647 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.272658 | orchestrator |
2025-06-02 00:50:08.272669 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-06-02 00:50:08.272698 | orchestrator | Monday 02 June 2025 00:44:58 +0000 (0:00:00.661) 0:00:50.301 ***********
2025-06-02 00:50:08.272710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272759 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.272775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272810 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.272821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272862 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.272874 | orchestrator |
2025-06-02 00:50:08.272885 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-06-02 00:50:08.272896 | orchestrator | Monday 02 June 2025 00:44:59 +0000 (0:00:01.399) 0:00:51.701 ***********
2025-06-02 00:50:08.272952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.272968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.272979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.272990 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.273002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.273013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.273024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.273049 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.273067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.273084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.273096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.273107 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.273118 | orchestrator |
2025-06-02 00:50:08.273129 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-06-02 00:50:08.273140 | orchestrator | Monday 02 June 2025 00:45:00 +0000 (0:00:01.411) 0:00:53.112 ***********
2025-06-02 00:50:08.273151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.273162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 00:50:08.273174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 00:50:08.273191 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.273231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 00:50:08.273264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.273277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.273289 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.273300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.273311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.273323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.273341 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.273352 | orchestrator | 2025-06-02 00:50:08.273363 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-02 00:50:08.273374 | orchestrator | Monday 02 June 2025 00:45:01 +0000 (0:00:00.810) 0:00:53.922 *********** 2025-06-02 00:50:08.273385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.273407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.273420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.273431 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.273442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.273454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.273465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.273482 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.273493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.273509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.273525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.273537 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.273548 | orchestrator | 2025-06-02 00:50:08.273559 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-02 00:50:08.273570 | orchestrator | Monday 02 June 2025 00:45:03 +0000 (0:00:01.278) 0:00:55.201 *********** 2025-06-02 00:50:08.273581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.273593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.273604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.273923 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.273938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.273950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.273974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.273986 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.273998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.274009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.278165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.278243 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.278261 | orchestrator | 2025-06-02 00:50:08.278275 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-02 00:50:08.278287 | orchestrator | Monday 02 June 2025 00:45:03 +0000 (0:00:00.516) 0:00:55.718 *********** 2025-06-02 00:50:08.278299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.278311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.278341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.278353 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.278373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.278386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.278397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.278416 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.278428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.278440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.278451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.278462 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.278473 | orchestrator | 2025-06-02 00:50:08.278485 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-02 00:50:08.278505 | orchestrator | Monday 02 June 2025 00:45:04 +0000 (0:00:00.489) 0:00:56.207 
*********** 2025-06-02 00:50:08.278522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.278534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.278562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.278573 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 00:50:08.278585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.278596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.278608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.278619 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 00:50:08.278641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 00:50:08.278654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 00:50:08.278665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 00:50:08.278682 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 00:50:08.278694 | orchestrator | 2025-06-02 00:50:08.278719 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-02 00:50:08.278740 | orchestrator | Monday 02 June 2025 00:45:04 +0000 (0:00:00.899) 0:00:57.106 *********** 2025-06-02 00:50:08.278752 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 00:50:08.278763 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 00:50:08.278774 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 00:50:08.278785 | orchestrator | 2025-06-02 00:50:08.278797 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-02 00:50:08.278808 | orchestrator | Monday 02 June 2025 00:45:06 +0000 (0:00:01.300) 0:00:58.407 *********** 2025-06-02 00:50:08.278818 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 00:50:08.278830 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 00:50:08.278841 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 00:50:08.278852 | orchestrator | 2025-06-02 00:50:08.278862 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-02 00:50:08.278873 | orchestrator | Monday 02 June 2025 00:45:07 +0000 (0:00:01.290) 0:00:59.698 *********** 2025-06-02 00:50:08.278884 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 00:50:08.278895 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 00:50:08.278906 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 00:50:08.278917 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 00:50:08.278928 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.278939 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 00:50:08.278950 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.278962 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 00:50:08.278973 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.278984 | orchestrator | 2025-06-02 00:50:08.278995 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-02 00:50:08.279006 | orchestrator | Monday 02 June 2025 00:45:08 +0000 (0:00:00.925) 0:01:00.623 *********** 2025-06-02 00:50:08.279024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 00:50:08.279047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 00:50:08.279059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 00:50:08.279070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 00:50:08.279082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 00:50:08.279094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 00:50:08.279105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 00:50:08.279128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 00:50:08.279147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 00:50:08.279158 | orchestrator | 2025-06-02 00:50:08.279169 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-02 00:50:08.279180 | orchestrator | Monday 02 June 2025 00:45:11 +0000 (0:00:02.799) 0:01:03.423 *********** 2025-06-02 00:50:08.279191 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.279217 | orchestrator | 2025-06-02 00:50:08.279228 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-02 00:50:08.279239 | orchestrator | Monday 02 June 2025 00:45:12 +0000 (0:00:00.886) 0:01:04.310 *********** 2025-06-02 00:50:08.279252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 00:50:08.279265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 00:50:08.279277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 00:50:08.279300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 00:50:08.279319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 00:50:08.279331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 00:50:08.279378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
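The service definitions streamed in the task output above all share one shape: each container entry may carry a `haproxy` sub-dict whose members (`enabled`, `mode`, `external`, `port`, `listen_port`, optional `external_fqdn`) determine which frontends the haproxy-config role renders, which is why only the `aodh-api` items report `changed` while the evaluator/listener/notifier items are skipped. As an illustrative sketch only (this mirrors the dict shape printed in the log, not kolla-ansible's actual role logic), filtering those entries might look like:

```python
# Illustrative sketch: the dict below copies the service-entry shape shown in
# the log output above; the filtering function is NOT kolla-ansible's real
# haproxy-config implementation, just a minimal model of its selection rule.
aodh_services = {
    "aodh-api": {
        "container_name": "aodh_api",
        "enabled": True,
        "haproxy": {
            "aodh_api": {"enabled": "yes", "mode": "http",
                         "external": False, "port": "8042",
                         "listen_port": "8042"},
            "aodh_api_external": {"enabled": "yes", "mode": "http",
                                  "external": True,
                                  "external_fqdn": "api.testbed.osism.xyz",
                                  "port": "8042", "listen_port": "8042"},
        },
    },
    # Services without a 'haproxy' sub-dict get no frontend (hence "skipping").
    "aodh-evaluator": {"container_name": "aodh_evaluator", "enabled": True},
}


def haproxy_entries(services):
    """Yield (listen_name, conf) for every enabled haproxy frontend entry."""
    for svc in services.values():
        if not svc.get("enabled"):
            continue
        for name, conf in svc.get("haproxy", {}).items():
            if conf.get("enabled") == "yes":
                yield name, conf


entries = dict(haproxy_entries(aodh_services))
print(sorted(entries))                                 # ['aodh_api', 'aodh_api_external']
print(entries["aodh_api_external"]["external_fqdn"])   # api.testbed.osism.xyz
```

Under this model, the internal and external API frontends both listen on port 8042, with the external one additionally bound to the `api.testbed.osism.xyz` FQDN seen throughout the log.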
2025-06-02 00:50:08.279429 | orchestrator | 2025-06-02 00:50:08.279441 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-02 00:50:08.279452 | orchestrator | Monday 02 June 2025 00:45:16 +0000 (0:00:04.326) 0:01:08.637 *********** 2025-06-02 00:50:08.279464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 00:50:08.279476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 00:50:08.279487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 
'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  
2025-06-02 00:50:08.279540 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.279555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 00:50:08.279576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279599 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 00:50:08.279610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 00:50:08.279628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 00:50:08.279645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.279692 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.279703 | orchestrator | 2025-06-02 00:50:08.279715 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-02 00:50:08.279726 | orchestrator | Monday 02 June 2025 00:45:17 +0000 (0:00:00.579) 0:01:09.216 *********** 2025-06-02 00:50:08.279737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 00:50:08.279750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 00:50:08.279761 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.279773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 00:50:08.279784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 00:50:08.279795 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.279807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 00:50:08.279818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 00:50:08.279829 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.279840 | orchestrator | 2025-06-02 00:50:08.279851 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-02 00:50:08.279873 | orchestrator | Monday 02 June 2025 00:45:17 +0000 (0:00:00.828) 0:01:10.045 *********** 2025-06-02 00:50:08.279885 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.279896 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.279907 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.279918 | orchestrator | 2025-06-02 00:50:08.279929 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-02 00:50:08.279940 | orchestrator | Monday 02 June 2025 00:45:19 +0000 (0:00:01.267) 0:01:11.312 *********** 2025-06-02 00:50:08.279951 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.279961 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.279973 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.279984 | orchestrator | 2025-06-02 00:50:08.279995 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-02 00:50:08.280006 | orchestrator | Monday 02 June 2025 00:45:21 +0000 
(0:00:01.871) 0:01:13.183 *********** 2025-06-02 00:50:08.280017 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.280027 | orchestrator | 2025-06-02 00:50:08.280038 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-02 00:50:08.280049 | orchestrator | Monday 02 June 2025 00:45:21 +0000 (0:00:00.544) 0:01:13.728 *********** 2025-06-02 00:50:08.280083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.280098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.280141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.280188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280253 | orchestrator | 2025-06-02 00:50:08.280264 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-02 00:50:08.280276 | orchestrator | Monday 02 June 2025 00:45:25 +0000 (0:00:04.283) 0:01:18.011 *********** 2025-06-02 00:50:08.280287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.280299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280334 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.280346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.280357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280386 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.280397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.280471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.280504 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.280515 | orchestrator | 2025-06-02 00:50:08.280526 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-02 00:50:08.280572 | orchestrator | Monday 02 June 2025 00:45:26 +0000 (0:00:00.534) 0:01:18.546 *********** 2025-06-02 00:50:08.280586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 00:50:08.280597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 00:50:08.280617 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.280629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}})  2025-06-02 00:50:08.280640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 00:50:08.280651 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.280662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 00:50:08.280673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 00:50:08.280685 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.280696 | orchestrator | 2025-06-02 00:50:08.280707 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-02 00:50:08.280718 | orchestrator | Monday 02 June 2025 00:45:27 +0000 (0:00:00.730) 0:01:19.277 *********** 2025-06-02 00:50:08.280729 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.280740 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.280751 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.280762 | orchestrator | 2025-06-02 00:50:08.280773 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-02 00:50:08.280784 | orchestrator | Monday 02 June 2025 00:45:28 +0000 (0:00:01.419) 0:01:20.696 *********** 2025-06-02 00:50:08.280795 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.280806 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.280817 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.280828 | orchestrator | 
2025-06-02 00:50:08.280839 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-02 00:50:08.280850 | orchestrator | Monday 02 June 2025 00:45:30 +0000 (0:00:01.745) 0:01:22.441 *********** 2025-06-02 00:50:08.280861 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.280872 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.280883 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.280893 | orchestrator | 2025-06-02 00:50:08.280905 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-02 00:50:08.280916 | orchestrator | Monday 02 June 2025 00:45:30 +0000 (0:00:00.253) 0:01:22.695 *********** 2025-06-02 00:50:08.280927 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.280938 | orchestrator | 2025-06-02 00:50:08.280949 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-02 00:50:08.280959 | orchestrator | Monday 02 June 2025 00:45:31 +0000 (0:00:00.558) 0:01:23.253 *********** 2025-06-02 00:50:08.280983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 
5']}}}}) 2025-06-02 00:50:08.281002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 00:50:08.281015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 00:50:08.281026 | orchestrator | 2025-06-02 00:50:08.281037 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-02 00:50:08.281048 | orchestrator | Monday 02 June 2025 00:45:34 
+0000 (0:00:03.034) 0:01:26.288 *********** 2025-06-02 00:50:08.281060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 00:50:08.281071 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.281083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 00:50:08.281095 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.281116 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 00:50:08.281135 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.281146 | orchestrator | 2025-06-02 00:50:08.281157 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-02 00:50:08.281168 | orchestrator | Monday 02 June 2025 00:45:35 +0000 (0:00:01.321) 0:01:27.610 *********** 2025-06-02 00:50:08.281180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 00:50:08.281229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 00:50:08.281243 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.281255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 00:50:08.281267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 00:50:08.281279 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.281290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 00:50:08.281302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check 
inter 2000 rise 2 fall 5']}})  2025-06-02 00:50:08.281313 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.281324 | orchestrator | 2025-06-02 00:50:08.281335 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-02 00:50:08.281346 | orchestrator | Monday 02 June 2025 00:45:37 +0000 (0:00:01.634) 0:01:29.244 *********** 2025-06-02 00:50:08.281364 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.281375 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.281386 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.281397 | orchestrator | 2025-06-02 00:50:08.281409 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-02 00:50:08.281420 | orchestrator | Monday 02 June 2025 00:45:37 +0000 (0:00:00.825) 0:01:30.070 *********** 2025-06-02 00:50:08.281430 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.281441 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.281453 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.281464 | orchestrator | 2025-06-02 00:50:08.281475 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-02 00:50:08.281492 | orchestrator | Monday 02 June 2025 00:45:38 +0000 (0:00:00.990) 0:01:31.060 *********** 2025-06-02 00:50:08.281504 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.281515 | orchestrator | 2025-06-02 00:50:08.281525 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-02 00:50:08.281542 | orchestrator | Monday 02 June 2025 00:45:39 +0000 (0:00:00.849) 0:01:31.910 *********** 2025-06-02 00:50:08.281554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.281567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.281623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.281636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281736 | orchestrator | 2025-06-02 00:50:08.281747 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-02 00:50:08.281758 | orchestrator | Monday 02 June 2025 00:45:43 +0000 (0:00:03.323) 0:01:35.234 *********** 2025-06-02 00:50:08.281770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.281782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281834 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.281845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.281857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281897 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.281918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.281931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.281970 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.281982 | orchestrator | 2025-06-02 00:50:08.281993 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 
2025-06-02 00:50:08.282004 | orchestrator | Monday 02 June 2025 00:45:44 +0000 (0:00:01.297) 0:01:36.531 ***********
2025-06-02 00:50:08.282045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-02 00:50:08.282060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-02 00:50:08.282072 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.282083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-02 00:50:08.282094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-02 00:50:08.282105 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.282122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-02 00:50:08.282138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-02 00:50:08.282150 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.282161 | orchestrator |
2025-06-02 00:50:08.282172 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-06-02 00:50:08.282183 | orchestrator | Monday 02 June 2025 00:45:45 +0000 (0:00:00.887) 0:01:37.419 ***********
2025-06-02 00:50:08.282210 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.282221 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.282232 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.282243 | orchestrator |
2025-06-02 00:50:08.282254 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-06-02 00:50:08.282265 | orchestrator | Monday 02 June 2025 00:45:46 +0000 (0:00:01.321) 0:01:38.740 ***********
2025-06-02 00:50:08.282276 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.282287 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.282298 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.282308 | orchestrator |
2025-06-02 00:50:08.282319 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-06-02 00:50:08.282330 | orchestrator | Monday 02 June 2025 00:45:49 +0000 (0:00:02.543) 0:01:41.283 ***********
2025-06-02 00:50:08.282340 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.282351 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.282362 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.282373 | orchestrator |
2025-06-02 00:50:08.282384 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-06-02 00:50:08.282395 | orchestrator | Monday 02 June 2025 00:45:49 +0000 (0:00:00.483) 0:01:41.767 ***********
2025-06-02 00:50:08.282406 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.282423 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.282434 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.282445 | orchestrator |
2025-06-02 00:50:08.282456 | orchestrator | TASK [include_role : designate]
************************************************ 2025-06-02 00:50:08.282467 | orchestrator | Monday 02 June 2025 00:45:49 +0000 (0:00:00.344) 0:01:42.111 *********** 2025-06-02 00:50:08.282478 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.282489 | orchestrator | 2025-06-02 00:50:08.282500 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-02 00:50:08.282510 | orchestrator | Monday 02 June 2025 00:45:50 +0000 (0:00:00.917) 0:01:43.029 *********** 2025-06-02 00:50:08.282522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 00:50:08.282534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 00:50:08.282560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 00:50:08.282577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 00:50:08.282606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 00:50:08.282629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 00:50:08.282674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 
00:50:08.282715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282764 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282827 | orchestrator | 2025-06-02 00:50:08.282838 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-02 00:50:08.282849 | orchestrator | Monday 02 June 2025 00:45:54 +0000 (0:00:03.707) 0:01:46.736 *********** 2025-06-02 00:50:08.282866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 00:50:08.282882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 00:50:08.282899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 00:50:08.282911 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 00:50:08.282934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282951 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.282996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283030 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.283041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value':
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283081 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.283092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'},
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 00:50:08.283104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 00:50:08.283115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.283211 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.283223 | orchestrator |
2025-06-02 00:50:08.283234 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-06-02 00:50:08.283245 | orchestrator | Monday 02 June 2025 00:45:55 +0000 (0:00:00.853) 0:01:47.589 ***********
2025-06-02 00:50:08.283257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-02 00:50:08.283268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-02 00:50:08.283279 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.283290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-02 00:50:08.283301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-02 00:50:08.283312 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.283323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-02 00:50:08.283334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-02 00:50:08.283345 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.283356 | orchestrator |
2025-06-02 00:50:08.283367 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-06-02 00:50:08.283378 | orchestrator | Monday 02 June 2025 00:45:56 +0000 (0:00:00.957) 0:01:48.547 ***********
2025-06-02 00:50:08.283388 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.283399 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.283410 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.283421 | orchestrator |
2025-06-02 00:50:08.283471 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-06-02 00:50:08.283484 | orchestrator | Monday 02 June 2025 00:45:58 +0000 (0:00:01.636) 0:01:50.184 ***********
2025-06-02 00:50:08.283495 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.283506 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.283517 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.283528 | orchestrator |
2025-06-02 00:50:08.283546 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-06-02 00:50:08.283569 | orchestrator | Monday 02 June 2025 00:45:59 +0000 (0:00:00.326) 0:01:52.075 ***********
2025-06-02 00:50:08.283580 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.283591 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.283602 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.283613 | orchestrator |
2025-06-02 00:50:08.283624 | orchestrator | TASK [include_role : glance] ***************************************************
2025-06-02 00:50:08.283634 | orchestrator | Monday 02 June 2025 00:46:00 +0000 (0:00:00.773) 0:01:52.401 ***********
2025-06-02 00:50:08.283645 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:50:08.283656 |
orchestrator |
2025-06-02 00:50:08.283667 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-06-02 00:50:08.283678 | orchestrator | Monday 02 June 2025 00:46:01 +0000 (0:00:00.773) 0:01:53.175 ***********
2025-06-02 00:50:08.283705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2
fall 5', '']}}}})
2025-06-02 00:50:08.283720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 00:50:08.283763 | orchestrator
| changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 00:50:08.283778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image':
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 00:50:08.283812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530',
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 00:50:08.283826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 00:50:08.283839 | orchestrator |
2025-06-02 00:50:08.283850 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-06-02 00:50:08.283861 | orchestrator | Monday 02 June 2025 00:46:05 +0000 (0:00:04.035) 0:01:57.210 ***********
2025-06-02 00:50:08.283884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 00:50:08.283902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 00:50:08.283915 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.283927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes':
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 00:50:08.283961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout':
'30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 00:50:08.283974 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.283986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi',
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 00:50:08.284018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list':
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 00:50:08.284031 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.284042 | orchestrator | 2025-06-02 00:50:08.284053 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-02 00:50:08.284064 | orchestrator | Monday 02 June 2025 00:46:07 +0000 (0:00:02.630) 0:01:59.840 *********** 2025-06-02 00:50:08.284076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 00:50:08.284088 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 00:50:08.284105 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.284117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 00:50:08.284129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 00:50:08.284140 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.284152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 00:50:08.284173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 00:50:08.284186 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.284249 | orchestrator | 2025-06-02 00:50:08.284263 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-02 00:50:08.284274 | orchestrator | Monday 02 June 2025 00:46:10 +0000 (0:00:02.900) 0:02:02.741 *********** 2025-06-02 00:50:08.284285 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.284296 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.284307 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.284318 | orchestrator | 2025-06-02 00:50:08.284328 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-02 00:50:08.284339 | orchestrator | Monday 02 June 2025 00:46:12 +0000 (0:00:01.478) 0:02:04.219 *********** 2025-06-02 00:50:08.284350 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.284361 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.284372 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.284383 | orchestrator | 2025-06-02 
00:50:08.284394 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-02 00:50:08.284405 | orchestrator | Monday 02 June 2025 00:46:13 +0000 (0:00:01.887) 0:02:06.107 *********** 2025-06-02 00:50:08.284416 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.284427 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.284438 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.284456 | orchestrator | 2025-06-02 00:50:08.284467 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-02 00:50:08.284478 | orchestrator | Monday 02 June 2025 00:46:14 +0000 (0:00:00.323) 0:02:06.431 *********** 2025-06-02 00:50:08.284488 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.284499 | orchestrator | 2025-06-02 00:50:08.284510 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-02 00:50:08.284521 | orchestrator | Monday 02 June 2025 00:46:15 +0000 (0:00:00.801) 0:02:07.233 *********** 2025-06-02 00:50:08.284532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 00:50:08.284545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 00:50:08.284557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 00:50:08.284568 | orchestrator | 2025-06-02 00:50:08.284579 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-02 00:50:08.284590 | orchestrator | Monday 02 June 2025 00:46:18 +0000 (0:00:03.129) 0:02:10.362 *********** 2025-06-02 00:50:08.284613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 00:50:08.284625 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.284637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 00:50:08.284654 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.284666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 00:50:08.284677 | orchestrator | skipping: [testbed-node-2] 
2025-06-02 00:50:08.284688 | orchestrator | 2025-06-02 00:50:08.284700 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-02 00:50:08.284710 | orchestrator | Monday 02 June 2025 00:46:18 +0000 (0:00:00.374) 0:02:10.737 *********** 2025-06-02 00:50:08.284722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 00:50:08.284733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 00:50:08.284744 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.284755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 00:50:08.284766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 00:50:08.284776 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.284786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 00:50:08.284795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 00:50:08.284805 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.284815 | orchestrator | 2025-06-02 00:50:08.284825 | orchestrator | 
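An aside on the grafana haproxy entries in the records above: they mix a string value (`'enabled': 'yes'`) and a boolean (`'enabled': True`) for the same kind of flag. Both forms are treated as truthy once passed through Ansible's `bool` filter. A minimal sketch of that normalization rule, under the assumption that it behaves like Ansible's filter (the helper name `to_bool` is ours, not kolla-ansible's):

```python
def to_bool(value):
    """Normalize kolla/Ansible-style truthy values ('yes', 'true', 'on', '1',
    or real booleans) to a Python bool, loosely mirroring Ansible's bool
    filter. Illustrative helper only."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "on", "1")

# Both spellings seen in the grafana haproxy dict evaluate the same way:
internal_enabled = to_bool("yes")   # from 'grafana_server'
external_enabled = to_bool(True)    # from 'grafana_server_external'
```

This is why the log shows the same task outcome regardless of which spelling a given service definition used.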
TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-02 00:50:08.284834 | orchestrator | Monday 02 June 2025 00:46:19 +0000 (0:00:00.650) 0:02:11.387 *********** 2025-06-02 00:50:08.284844 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.284854 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.284864 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.284873 | orchestrator | 2025-06-02 00:50:08.284883 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-02 00:50:08.284893 | orchestrator | Monday 02 June 2025 00:46:20 +0000 (0:00:01.641) 0:02:13.029 *********** 2025-06-02 00:50:08.284903 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.284913 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.284923 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.284932 | orchestrator | 2025-06-02 00:50:08.284946 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-02 00:50:08.284961 | orchestrator | Monday 02 June 2025 00:46:22 +0000 (0:00:01.950) 0:02:14.980 *********** 2025-06-02 00:50:08.284971 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.284981 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.284991 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.285000 | orchestrator | 2025-06-02 00:50:08.285014 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-02 00:50:08.285024 | orchestrator | Monday 02 June 2025 00:46:23 +0000 (0:00:00.298) 0:02:15.278 *********** 2025-06-02 00:50:08.285033 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.285043 | orchestrator | 2025-06-02 00:50:08.285053 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-02 00:50:08.285062 | 
orchestrator | Monday 02 June 2025 00:46:23 +0000 (0:00:00.850) 0:02:16.129 *********** 2025-06-02 00:50:08.285073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 00:50:08.285245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 00:50:08.285275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 00:50:08.285287 | orchestrator | 2025-06-02 00:50:08.285297 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-02 00:50:08.285307 | orchestrator | Monday 02 June 2025 00:46:29 +0000 (0:00:05.151) 0:02:21.281 *********** 2025-06-02 00:50:08.285330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 00:50:08.285348 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.285359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 00:50:08.285375 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.285397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 00:50:08.285408 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.285418 | orchestrator | 2025-06-02 00:50:08.285428 | orchestrator | TASK [haproxy-config : Configuring 
firewall for horizon] *********************** 2025-06-02 00:50:08.285437 | orchestrator | Monday 02 June 2025 00:46:30 +0000 (0:00:00.928) 0:02:22.210 *********** 2025-06-02 00:50:08.285448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 00:50:08.285458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 00:50:08.285469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 00:50:08.285479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 00:50:08.285494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2025-06-02 00:50:08.285505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 00:50:08.285514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 00:50:08.285529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 00:50:08.285543 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.285553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 00:50:08.285563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 00:50:08.285574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 00:50:08.285583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 00:50:08.285593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 00:50:08.285603 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.285613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 00:50:08.285623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 00:50:08.285633 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.285643 | orchestrator | 2025-06-02 00:50:08.285653 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-02 00:50:08.285662 | orchestrator | Monday 02 June 2025 00:46:31 +0000 (0:00:01.050) 0:02:23.260 *********** 2025-06-02 00:50:08.285672 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.285682 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.285691 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.285706 | orchestrator | 2025-06-02 00:50:08.285716 | orchestrator | 
TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-02 00:50:08.285725 | orchestrator | Monday 02 June 2025 00:46:32 +0000 (0:00:01.437) 0:02:24.698 *********** 2025-06-02 00:50:08.285735 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.285745 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.285754 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.285764 | orchestrator | 2025-06-02 00:50:08.285773 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-02 00:50:08.285783 | orchestrator | Monday 02 June 2025 00:46:34 +0000 (0:00:01.880) 0:02:26.578 *********** 2025-06-02 00:50:08.285793 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.285802 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.285812 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.285822 | orchestrator | 2025-06-02 00:50:08.285834 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-02 00:50:08.285845 | orchestrator | Monday 02 June 2025 00:46:34 +0000 (0:00:00.290) 0:02:26.869 *********** 2025-06-02 00:50:08.285856 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.285868 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.285880 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.285892 | orchestrator | 2025-06-02 00:50:08.285903 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-02 00:50:08.285915 | orchestrator | Monday 02 June 2025 00:46:35 +0000 (0:00:00.292) 0:02:27.162 *********** 2025-06-02 00:50:08.285927 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.285938 | orchestrator | 2025-06-02 00:50:08.285947 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-02 
00:50:08.285957 | orchestrator | Monday 02 June 2025 00:46:36 +0000 (0:00:01.653) 0:02:28.815 *********** 2025-06-02 00:50:08.285977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:50:08.285990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:50:08.286000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:50:08.286041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:50:08.286055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:50:08.286081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:50:08.286092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 
00:50:08.286103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:50:08.286119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:50:08.286129 | orchestrator | 2025-06-02 00:50:08.286139 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-02 00:50:08.286149 | orchestrator | Monday 02 June 2025 00:46:42 +0000 (0:00:05.338) 0:02:34.154 *********** 2025-06-02 00:50:08.286160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 00:50:08.286184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:50:08.286210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  
2025-06-02 00:50:08.286221 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.286232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 00:50:08.286249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:50:08.286259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:50:08.286269 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.286285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 00:50:08.286301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:50:08.286311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:50:08.286329 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.286340 | orchestrator | 2025-06-02 00:50:08.286350 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-02 00:50:08.286360 | orchestrator | Monday 02 June 2025 00:46:42 +0000 (0:00:00.609) 0:02:34.764 *********** 2025-06-02 00:50:08.286370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 00:50:08.286380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 00:50:08.286390 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
00:50:08.286400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 00:50:08.286411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 00:50:08.286421 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.286430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 00:50:08.286441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 00:50:08.286450 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.286460 | orchestrator | 2025-06-02 00:50:08.286470 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-02 00:50:08.286479 | orchestrator | Monday 02 June 2025 00:46:43 +0000 (0:00:01.254) 0:02:36.018 *********** 2025-06-02 00:50:08.286489 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.286499 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.286508 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.286518 | orchestrator | 2025-06-02 00:50:08.286527 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 
2025-06-02 00:50:08.286537 | orchestrator | Monday 02 June 2025 00:46:45 +0000 (0:00:01.261) 0:02:37.279 *********** 2025-06-02 00:50:08.286546 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.286556 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.286565 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.286575 | orchestrator | 2025-06-02 00:50:08.286585 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-02 00:50:08.286599 | orchestrator | Monday 02 June 2025 00:46:46 +0000 (0:00:01.717) 0:02:38.997 *********** 2025-06-02 00:50:08.286609 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.286618 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.286628 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.286638 | orchestrator | 2025-06-02 00:50:08.286647 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-02 00:50:08.286661 | orchestrator | Monday 02 June 2025 00:46:47 +0000 (0:00:00.262) 0:02:39.260 *********** 2025-06-02 00:50:08.286679 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.286689 | orchestrator | 2025-06-02 00:50:08.286698 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-02 00:50:08.286708 | orchestrator | Monday 02 June 2025 00:46:48 +0000 (0:00:00.995) 0:02:40.255 *********** 2025-06-02 00:50:08.286718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 00:50:08.286728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.286739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 00:50:08.286749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.286768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 00:50:08.286785 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.286795 | orchestrator | 2025-06-02 00:50:08.286805 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-02 00:50:08.286814 | orchestrator | Monday 02 June 2025 00:46:52 +0000 (0:00:04.586) 0:02:44.842 *********** 2025-06-02 00:50:08.286824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 
00:50:08.286834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.286845 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.286859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 00:50:08.286879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.286889 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.286899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 00:50:08.286910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.286920 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.286930 | orchestrator |
2025-06-02 00:50:08.286940 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-06-02 00:50:08.286949 | orchestrator | Monday 02 June 2025 00:46:53 +0000 (0:00:00.594) 0:02:45.437 ***********
2025-06-02 00:50:08.286960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-02 00:50:08.286969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-02 00:50:08.286980 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.286990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-02 00:50:08.287000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-02 00:50:08.287015 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.287025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-02 00:50:08.287035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-02 00:50:08.287050 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.287060 | orchestrator |
2025-06-02 00:50:08.287070 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-06-02 00:50:08.287079 | orchestrator | Monday 02 June 2025 00:46:54 +0000 (0:00:01.074) 0:02:46.511 ***********
2025-06-02 00:50:08.287089 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.287102 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.287112 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.287122 | orchestrator |
2025-06-02 00:50:08.287132 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-06-02 00:50:08.287141 | orchestrator | Monday 02 June 2025 00:46:55 +0000 (0:00:01.152) 0:02:47.664 ***********
2025-06-02 00:50:08.287151 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.287160 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.287170 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.287180 | orchestrator |
2025-06-02 00:50:08.287189 | orchestrator | TASK [include_role : manila] ***************************************************
2025-06-02 00:50:08.287211 | orchestrator | Monday 02 June 2025 00:46:57 +0000 (0:00:01.752) 0:02:49.417 ***********
2025-06-02 00:50:08.287221 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:50:08.287231 | orchestrator |
2025-06-02 00:50:08.287240 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-06-02 00:50:08.287250 | orchestrator | Monday 02 June 2025 00:46:58 +0000 (0:00:00.995) 0:02:50.412 ***********
2025-06-02 00:50:08.287260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 00:50:08.287271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 00:50:08.287330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 00:50:08.287376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287416 | orchestrator |
2025-06-02 00:50:08.287426 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-06-02 00:50:08.287436 | orchestrator | Monday 02 June 2025 00:47:01 +0000 (0:00:03.482) 0:02:53.894 ***********
2025-06-02 00:50:08.287446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 00:50:08.287456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287491 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.287511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 00:50:08.287522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287561 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.287571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 00:50:08.287586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.287617 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.287627 | orchestrator |
2025-06-02 00:50:08.287637 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-06-02 00:50:08.287647 | orchestrator | Monday 02 June 2025 00:47:02 +0000 (0:00:00.663) 0:02:54.557 ***********
2025-06-02 00:50:08.287657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-02 00:50:08.287666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-02 00:50:08.287682 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.287692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-02 00:50:08.287701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-02 00:50:08.287711 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.287721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-02 00:50:08.287731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-02 00:50:08.287741 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.287750 | orchestrator |
2025-06-02 00:50:08.287760 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-06-02 00:50:08.287784 | orchestrator | Monday 02 June 2025 00:47:03 +0000 (0:00:00.870) 0:02:55.428 ***********
2025-06-02 00:50:08.287794 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.287804 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.287813 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.287823 | orchestrator |
2025-06-02 00:50:08.287833 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-06-02 00:50:08.287842 | orchestrator | Monday 02 June 2025 00:47:04 +0000 (0:00:01.568) 0:02:56.996 ***********
2025-06-02 00:50:08.287852 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.287861 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.287871 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.287881 | orchestrator |
2025-06-02 00:50:08.287891 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-06-02 00:50:08.287900 | orchestrator | Monday 02 June 2025 00:47:06 +0000 (0:00:01.991) 0:02:58.988 ***********
2025-06-02 00:50:08.287910 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:50:08.287919 | orchestrator |
2025-06-02 00:50:08.287929 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-06-02 00:50:08.287938 | orchestrator | Monday 02 June 2025 00:47:07 +0000 (0:00:01.129) 0:03:00.118 ***********
2025-06-02 00:50:08.287948 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 00:50:08.287958 | orchestrator |
2025-06-02 00:50:08.287968 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-06-02 00:50:08.287977 | orchestrator | Monday 02 June 2025 00:47:10 +0000 (0:00:02.806) 0:03:02.924 ***********
2025-06-02 00:50:08.287999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 00:50:08.288016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 00:50:08.288026 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.288043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 00:50:08.288058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 00:50:08.288073 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.288084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 00:50:08.288095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 00:50:08.288105 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.288115 | orchestrator |
2025-06-02 00:50:08.288125 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-06-02 00:50:08.288135 | orchestrator | Monday 02 June 2025 00:47:13 +0000 (0:00:02.485) 0:03:05.410 ***********
2025-06-02 00:50:08.288156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 00:50:08.288173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 00:50:08.288183 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.288238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup',
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:50:08.288262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 00:50:08.288273 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.288283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:50:08.288300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 00:50:08.288311 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.288321 | orchestrator | 2025-06-02 00:50:08.288331 | orchestrator | TASK 
[haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-02 00:50:08.288341 | orchestrator | Monday 02 June 2025 00:47:15 +0000 (0:00:02.151) 0:03:07.561 *********** 2025-06-02 00:50:08.288351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 00:50:08.288366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 00:50:08.288376 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.288396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 00:50:08.288406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 00:50:08.288417 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.288427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 00:50:08.288437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 00:50:08.288447 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.288456 | orchestrator | 2025-06-02 00:50:08.288466 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-02 00:50:08.288476 | orchestrator | Monday 02 June 2025 00:47:17 +0000 (0:00:02.486) 0:03:10.048 *********** 2025-06-02 00:50:08.288486 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.288496 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.288506 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.288515 | orchestrator | 2025-06-02 00:50:08.288523 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-02 00:50:08.288531 | orchestrator | Monday 02 June 2025 00:47:19 +0000 (0:00:01.939) 0:03:11.987 *********** 2025-06-02 00:50:08.288539 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.288547 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.288555 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.288563 | orchestrator | 2025-06-02 00:50:08.288571 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-02 00:50:08.288578 | orchestrator | Monday 02 June 2025 00:47:21 +0000 (0:00:01.332) 0:03:13.320 *********** 2025-06-02 00:50:08.288586 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.288594 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.288602 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.288610 | orchestrator | 2025-06-02 00:50:08.288618 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-02 00:50:08.288630 | orchestrator | Monday 02 June 2025 00:47:21 +0000 (0:00:00.304) 0:03:13.624 *********** 2025-06-02 00:50:08.288638 | orchestrator | 
included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.288646 | orchestrator | 2025-06-02 00:50:08.288654 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-02 00:50:08.288661 | orchestrator | Monday 02 June 2025 00:47:22 +0000 (0:00:01.049) 0:03:14.674 *********** 2025-06-02 00:50:08.288673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 00:50:08 | INFO  | Task c5ab2128-9e41-4cf8-bd92-d55dc069d3c5 is in state STARTED 2025-06-02 00:50:08.288686 | orchestrator | 2025-06-02 00:50:08 | INFO  | Task c2c30f6f-e20c-46b0-b667-aff12a5ca853 is in state STARTED 2025-06-02 00:50:08.288695 | orchestrator | 2025-06-02 00:50:08 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:50:08.288703 | orchestrator | 2025-06-02 00:50:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:50:08.288822 | orchestrator | 2025-06-02 00:50:08.288836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 00:50:08.288846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 00:50:08.288854 | orchestrator | 2025-06-02 00:50:08.288862 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-02 00:50:08.288871 | orchestrator | Monday 02 June 2025 00:47:24 +0000 (0:00:01.657) 0:03:16.331 *********** 2025-06-02 00:50:08.288879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 00:50:08.288895 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.288904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 00:50:08.288913 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.288972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 00:50:08.288985 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.288993 | orchestrator | 2025-06-02 00:50:08.289001 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-02 00:50:08.289009 | orchestrator | Monday 02 June 2025 00:47:24 +0000 (0:00:00.367) 0:03:16.699 *********** 2025-06-02 00:50:08.289017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-02 00:50:08.289025 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.289033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-02 00:50:08.289042 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.289050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-02 00:50:08.289058 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.289066 | orchestrator | 2025-06-02 00:50:08.289073 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-02 00:50:08.289082 | orchestrator | Monday 02 June 2025 00:47:25 +0000 (0:00:00.559) 0:03:17.258 *********** 2025-06-02 00:50:08.289089 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 00:50:08.289097 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.289105 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.289113 | orchestrator | 2025-06-02 00:50:08.289121 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-02 00:50:08.289135 | orchestrator | Monday 02 June 2025 00:47:25 +0000 (0:00:00.683) 0:03:17.942 *********** 2025-06-02 00:50:08.289143 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.289151 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.289158 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.289166 | orchestrator | 2025-06-02 00:50:08.289174 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-02 00:50:08.289182 | orchestrator | Monday 02 June 2025 00:47:27 +0000 (0:00:01.223) 0:03:19.165 *********** 2025-06-02 00:50:08.289190 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.289210 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.289219 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.289226 | orchestrator | 2025-06-02 00:50:08.289235 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-02 00:50:08.289243 | orchestrator | Monday 02 June 2025 00:47:27 +0000 (0:00:00.295) 0:03:19.461 *********** 2025-06-02 00:50:08.289251 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.289258 | orchestrator | 2025-06-02 00:50:08.289266 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-02 00:50:08.289274 | orchestrator | Monday 02 June 2025 00:47:28 +0000 (0:00:01.347) 0:03:20.808 *********** 2025-06-02 00:50:08.289283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 00:50:08.289343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-06-02 00:50:08.289388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 00:50:08.289445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.289480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.289497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 00:50:08.289574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.289587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.289615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 00:50:08.289676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.289688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.289702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.289732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 00:50:08.289786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.289811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 00:50:08.289820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.289836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 00:50:08.289909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.289917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 00:50:08.289926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.289992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 00:50:08.290048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.290153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 00:50:08.290268 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.290277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290285 | orchestrator | 2025-06-02 00:50:08.290294 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-02 00:50:08.290302 | orchestrator | Monday 02 June 2025 00:47:32 +0000 (0:00:04.186) 0:03:24.995 *********** 2025-06-02 00:50:08.290310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 00:50:08.290318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 00:50:08.290410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.290507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 00:50:08.290604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 00:50:08.290616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 00:50:08.290633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.290645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290731 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.290740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 00:50:08.290748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 00:50:08.290849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.290947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.290955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.290982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.291036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.291048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 00:50:08.291057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 00:50:08.291065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.291073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 00:50:08.291087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.291116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 00:50:08.291126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.291135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.291144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 00:50:08.291157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.291169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 
'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 00:50:08.291177 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.291242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.291254 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.291262 | orchestrator | 2025-06-02 00:50:08.291270 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-02 00:50:08.291278 | orchestrator | Monday 02 June 2025 00:47:34 +0000 (0:00:01.619) 0:03:26.614 *********** 2025-06-02 00:50:08.291286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 
00:50:08.291294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 00:50:08.291303 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.291311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 00:50:08.291319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 00:50:08.291327 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.291335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 00:50:08.291343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 00:50:08.291357 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.291365 | orchestrator | 2025-06-02 00:50:08.291373 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-02 00:50:08.291381 | orchestrator | Monday 02 June 2025 00:47:36 +0000 (0:00:02.281) 0:03:28.896 *********** 2025-06-02 00:50:08.291389 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.291397 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.291405 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.291413 | orchestrator | 2025-06-02 00:50:08.291421 | orchestrator | TASK 
[proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-02 00:50:08.291429 | orchestrator | Monday 02 June 2025 00:47:38 +0000 (0:00:01.288) 0:03:30.185 *********** 2025-06-02 00:50:08.291437 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.291445 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.291453 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.291460 | orchestrator | 2025-06-02 00:50:08.291469 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-02 00:50:08.291476 | orchestrator | Monday 02 June 2025 00:47:40 +0000 (0:00:02.002) 0:03:32.187 *********** 2025-06-02 00:50:08.291484 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.291491 | orchestrator | 2025-06-02 00:50:08.291498 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-02 00:50:08.291504 | orchestrator | Monday 02 June 2025 00:47:41 +0000 (0:00:01.148) 0:03:33.335 *********** 2025-06-02 00:50:08.291514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.291538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.291546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.291558 | orchestrator | 
2025-06-02 00:50:08.291564 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-02 00:50:08.291571 | orchestrator | Monday 02 June 2025 00:47:44 +0000 (0:00:03.281) 0:03:36.616 *********** 2025-06-02 00:50:08.291578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.291585 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.291592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.291599 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.291666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.291682 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.291690 | orchestrator | 2025-06-02 00:50:08.291698 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-02 00:50:08.291707 | orchestrator | Monday 02 June 2025 00:47:44 +0000 (0:00:00.502) 0:03:37.119 *********** 2025-06-02 00:50:08.291715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 00:50:08.291728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 00:50:08.291737 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.291745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 00:50:08.291753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 00:50:08.291761 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.291770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 00:50:08.291778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 00:50:08.291787 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.291794 | orchestrator | 2025-06-02 00:50:08.291802 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-02 00:50:08.291810 | orchestrator | Monday 02 June 2025 00:47:45 +0000 (0:00:00.768) 0:03:37.887 *********** 2025-06-02 00:50:08.291819 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.291827 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.291835 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.291843 | orchestrator | 2025-06-02 00:50:08.291851 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL 
rules config] ********** 2025-06-02 00:50:08.291859 | orchestrator | Monday 02 June 2025 00:47:47 +0000 (0:00:01.550) 0:03:39.438 *********** 2025-06-02 00:50:08.291867 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.291875 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.291883 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.291891 | orchestrator | 2025-06-02 00:50:08.291899 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-02 00:50:08.291908 | orchestrator | Monday 02 June 2025 00:47:49 +0000 (0:00:01.960) 0:03:41.398 *********** 2025-06-02 00:50:08.291916 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.291924 | orchestrator | 2025-06-02 00:50:08.291932 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-02 00:50:08.291939 | orchestrator | Monday 02 June 2025 00:47:50 +0000 (0:00:01.205) 0:03:42.604 *********** 2025-06-02 00:50:08.291968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.291982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.291991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.292027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.292036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292069 | orchestrator | 2025-06-02 00:50:08.292076 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-02 00:50:08.292083 | orchestrator | Monday 02 June 2025 00:47:54 +0000 (0:00:04.107) 0:03:46.711 *********** 2025-06-02 00:50:08.292093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.292115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292134 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.292141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.292149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292163 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.292187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.292213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.292228 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.292235 | orchestrator | 2025-06-02 00:50:08.292241 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-02 00:50:08.292248 | orchestrator | Monday 02 June 2025 00:47:55 +0000 (0:00:00.903) 0:03:47.615 *********** 2025-06-02 00:50:08.292255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 00:50:08.292262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 00:50:08.292269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 00:50:08.292276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 00:50:08.292283 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.292290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 00:50:08.292297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 00:50:08.292307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 00:50:08.292317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 00:50:08.292324 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.292346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 00:50:08.292354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}})
2025-06-02 00:50:08.292361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-02 00:50:08.292368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-02 00:50:08.292375 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.292382 | orchestrator |
2025-06-02 00:50:08.292389 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-06-02 00:50:08.292395 | orchestrator | Monday 02 June 2025 00:47:56 +0000 (0:00:00.840) 0:03:48.455 ***********
2025-06-02 00:50:08.292402 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.292408 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.292415 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.292422 | orchestrator |
2025-06-02 00:50:08.292428 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-06-02 00:50:08.292435 | orchestrator | Monday 02 June 2025 00:47:57 +0000 (0:00:01.623) 0:03:50.079 ***********
2025-06-02 00:50:08.292442 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.292448 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.292455 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.292462 | orchestrator |
2025-06-02 00:50:08.292468 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-06-02 00:50:08.292475 | orchestrator | Monday 02 June 2025 00:47:59 +0000 (0:00:01.979) 0:03:52.058 ***********
2025-06-02 00:50:08.292481 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:50:08.292488 | orchestrator |
2025-06-02 00:50:08.292494 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-06-02 00:50:08.292501 | orchestrator | Monday 02 June 2025 00:48:01 +0000 (0:00:01.487) 0:03:53.546 ***********
2025-06-02 00:50:08.292508 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-06-02 00:50:08.292515 | orchestrator |
2025-06-02 00:50:08.292521 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-06-02 00:50:08.292528 | orchestrator | Monday 02 June 2025 00:48:02 +0000 (0:00:01.019) 0:03:54.566 ***********
2025-06-02 00:50:08.292535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292561 | orchestrator |
2025-06-02 00:50:08.292568 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-06-02 00:50:08.292575 | orchestrator | Monday 02 June 2025 00:48:06 +0000 (0:00:03.807) 0:03:58.373 ***********
2025-06-02 00:50:08.292600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292608 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.292616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292623 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.292630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292637 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.292643 | orchestrator |
2025-06-02 00:50:08.292650 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-06-02 00:50:08.292657 | orchestrator | Monday 02 June 2025 00:48:07 +0000 (0:00:01.213) 0:03:59.587 ***********
2025-06-02 00:50:08.292664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 00:50:08.292671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 00:50:08.292684 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.292691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 00:50:08.292698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 00:50:08.292705 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.292711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 00:50:08.292718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 00:50:08.292725 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.292732 | orchestrator |
2025-06-02 00:50:08.292738 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-06-02 00:50:08.292745 | orchestrator | Monday 02 June 2025 00:48:09 +0000 (0:00:01.712) 0:04:01.299 ***********
2025-06-02 00:50:08.292752 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.292758 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.292765 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.292772 | orchestrator |
2025-06-02 00:50:08.292778 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-06-02 00:50:08.292785 | orchestrator | Monday 02 June 2025 00:48:11 +0000 (0:00:02.235) 0:04:03.535 ***********
2025-06-02 00:50:08.292792 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:50:08.292798 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:50:08.292805 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:50:08.292811 | orchestrator |
2025-06-02 00:50:08.292818 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-06-02 00:50:08.292825 | orchestrator | Monday 02 June 2025 00:48:14 +0000 (0:00:02.889) 0:04:06.424 ***********
2025-06-02 00:50:08.292834 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-06-02 00:50:08.292841 | orchestrator |
2025-06-02 00:50:08.292848 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-06-02 00:50:08.292869 | orchestrator | Monday 02 June 2025 00:48:15 +0000 (0:00:00.811) 0:04:07.235 ***********
2025-06-02 00:50:08.292877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292884 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.292891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292902 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.292909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292916 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.292923 | orchestrator |
2025-06-02 00:50:08.292929 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-06-02 00:50:08.292936 | orchestrator | Monday 02 June 2025 00:48:16 +0000 (0:00:01.306) 0:04:08.542 ***********
2025-06-02 00:50:08.292943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292950 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.292957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292964 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.292971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 00:50:08.292978 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.292985 | orchestrator |
2025-06-02 00:50:08.292991 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-06-02 00:50:08.292998 | orchestrator | Monday 02 June 2025 00:48:17 +0000 (0:00:01.538) 0:04:10.081 ***********
2025-06-02 00:50:08.293005 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.293014 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.293021 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.293027 | orchestrator |
2025-06-02 00:50:08.293034 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-06-02 00:50:08.293055 | orchestrator | Monday 02 June 2025 00:48:19 +0000 (0:00:01.112) 0:04:11.193 ***********
2025-06-02 00:50:08.293063 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:50:08.293070 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:50:08.293077 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:50:08.293083 | orchestrator |
2025-06-02 00:50:08.293090 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-06-02 00:50:08.293097 | orchestrator | Monday 02 June 2025 00:48:21 +0000 (0:00:02.203) 0:04:13.397 ***********
2025-06-02 00:50:08.293104 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:50:08.293114 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:50:08.293121 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:50:08.293127 | orchestrator |
2025-06-02 00:50:08.293134 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-06-02 00:50:08.293141 | orchestrator | Monday 02 June 2025 00:48:24 +0000 (0:00:02.782) 0:04:16.180 ***********
2025-06-02 00:50:08.293148 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-06-02 00:50:08.293154 | orchestrator |
2025-06-02 00:50:08.293161 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-06-02 00:50:08.293168 | orchestrator | Monday 02 June 2025 00:48:25 +0000 (0:00:00.986) 0:04:17.166 ***********
2025-06-02 00:50:08.293175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 00:50:08.293182 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.293189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 00:50:08.293207 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.293214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 00:50:08.293221 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.293228 | orchestrator |
2025-06-02 00:50:08.293235 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-06-02 00:50:08.293241 | orchestrator | Monday 02 June 2025 00:48:25 +0000 (0:00:00.946) 0:04:18.113 ***********
2025-06-02 00:50:08.293248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 00:50:08.293255 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.293262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 00:50:08.293275 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.293298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 00:50:08.293306 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.293313 | orchestrator |
2025-06-02 00:50:08.293320 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-06-02 00:50:08.293327 | orchestrator | Monday 02 June 2025 00:48:27 +0000 (0:00:01.165) 0:04:19.279 ***********
2025-06-02 00:50:08.293333 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.293340 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.293347 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:50:08.293353 | orchestrator |
2025-06-02 00:50:08.293360 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-06-02 00:50:08.293367 | orchestrator | Monday 02 June 2025 00:48:28 +0000 (0:00:01.620) 0:04:20.900 ***********
2025-06-02 00:50:08.293373 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:50:08.293380 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:50:08.293387 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:50:08.293394 | orchestrator |
2025-06-02 00:50:08.293400 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-06-02 00:50:08.293407 | orchestrator | Monday 02 June 2025 00:48:30 +0000 (0:00:02.165) 0:04:23.065 ***********
2025-06-02 00:50:08.293414 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:50:08.293420 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:50:08.293427 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:50:08.293434 | orchestrator |
2025-06-02 00:50:08.293440 | orchestrator | TASK [include_role : octavia] **************************************************
2025-06-02 00:50:08.293447 | orchestrator | Monday 02 June 2025 00:48:33 +0000 (0:00:02.987) 0:04:26.052 ***********
2025-06-02 00:50:08.293454 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:50:08.293460 | orchestrator |
2025-06-02 00:50:08.293467 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-06-02 00:50:08.293473 | orchestrator | Monday 02 June 2025 00:48:35 +0000 (0:00:01.308) 0:04:27.361 ***********
2025-06-02 00:50:08.293480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 00:50:08.293488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 00:50:08.293499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.293539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 00:50:08.293546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 00:50:08.293553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.293597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 00:50:08.293604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 00:50:08.293612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.293636 | orchestrator |
2025-06-02 00:50:08.293643 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-06-02 00:50:08.293650 | orchestrator | Monday 02 June 2025 00:48:38 +0000 (0:00:03.493) 0:04:30.854 ***********
2025-06-02 00:50:08.293674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 00:50:08.293682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 00:50:08.293689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:50:08.293714 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.293721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 00:50:08.293745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 00:50:08.293754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 00:50:08.293768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 00:50:08.293780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.293787 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.293794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 00:50:08.293819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 00:50:08.293827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 00:50:08.293834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:50:08.293841 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.293848 | orchestrator | 2025-06-02 00:50:08.293855 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-02 00:50:08.293861 | orchestrator | Monday 02 June 2025 00:48:39 +0000 (0:00:00.701) 0:04:31.556 *********** 2025-06-02 00:50:08.293868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 00:50:08.293879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 00:50:08.293886 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.293893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2025-06-02 00:50:08.293900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 00:50:08.293907 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.293913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 00:50:08.293920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 00:50:08.293927 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.293934 | orchestrator | 2025-06-02 00:50:08.293940 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-02 00:50:08.293947 | orchestrator | Monday 02 June 2025 00:48:40 +0000 (0:00:00.831) 0:04:32.387 *********** 2025-06-02 00:50:08.293954 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.293960 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.293967 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.293974 | orchestrator | 2025-06-02 00:50:08.293980 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-02 00:50:08.293987 | orchestrator | Monday 02 June 2025 00:48:41 +0000 (0:00:01.662) 0:04:34.049 *********** 2025-06-02 00:50:08.293993 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.294000 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.294007 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.294033 | orchestrator | 2025-06-02 
00:50:08.294042 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-02 00:50:08.294048 | orchestrator | Monday 02 June 2025 00:48:43 +0000 (0:00:02.004) 0:04:36.054 *********** 2025-06-02 00:50:08.294055 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.294062 | orchestrator | 2025-06-02 00:50:08.294071 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-02 00:50:08.294078 | orchestrator | Monday 02 June 2025 00:48:45 +0000 (0:00:01.353) 0:04:37.408 *********** 2025-06-02 00:50:08.294102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 00:50:08.294111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 00:50:08.294123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 00:50:08.294131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 00:50:08.294156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 00:50:08.294166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 00:50:08.294177 | orchestrator | 2025-06-02 00:50:08.294184 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-02 00:50:08.294191 | orchestrator | Monday 02 June 2025 00:48:50 +0000 (0:00:05.095) 0:04:42.503 *********** 2025-06-02 00:50:08.294208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 00:50:08.294215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 00:50:08.294223 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.294249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 00:50:08.294258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 00:50:08.294270 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.294277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 00:50:08.294284 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 00:50:08.294292 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.294299 | orchestrator | 2025-06-02 00:50:08.294306 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-02 00:50:08.294312 | orchestrator | Monday 02 June 2025 00:48:51 +0000 (0:00:01.132) 0:04:43.636 *********** 2025-06-02 00:50:08.294322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 00:50:08.294343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 00:50:08.294352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 00:50:08.294359 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.294370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 00:50:08.294377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 00:50:08.294384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 00:50:08.294391 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.294397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 00:50:08.294404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 00:50:08.294411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 00:50:08.294418 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.294424 | orchestrator | 2025-06-02 00:50:08.294431 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-02 00:50:08.294438 | orchestrator | Monday 02 June 2025 00:48:52 +0000 (0:00:00.843) 0:04:44.479 *********** 2025-06-02 00:50:08.294445 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.294451 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.294458 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.294464 | orchestrator | 2025-06-02 00:50:08.294471 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-02 00:50:08.294478 | orchestrator | Monday 02 June 2025 00:48:52 +0000 (0:00:00.437) 0:04:44.917 *********** 2025-06-02 00:50:08.294484 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.294491 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.294497 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.294504 | orchestrator | 2025-06-02 00:50:08.294511 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-02 00:50:08.294517 | orchestrator | Monday 02 June 2025 00:48:54 +0000 (0:00:01.325) 0:04:46.242 *********** 2025-06-02 00:50:08.294524 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.294530 | orchestrator | 2025-06-02 00:50:08.294537 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-02 00:50:08.294544 | orchestrator | Monday 02 June 2025 00:48:55 +0000 (0:00:01.595) 0:04:47.837 *********** 2025-06-02 00:50:08.294551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 00:50:08.294564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:50:08.294587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:50:08.294595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:50:08.294603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:50:08.294610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 00:50:08.294617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:50:08.294624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:50:08.294638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:50:08.294660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})
2025-06-02 00:50:08.294669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 00:50:08.294676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:50:08.294683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294690 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:50:08.294716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 00:50:08.294724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 00:50:08.294731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes':
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:50:08.294752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 00:50:08.294770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter',
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 00:50:08.294778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group':
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:50:08.294799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 00:50:08.294813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy':
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 00:50:08.294824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:50:08.294845 | orchestrator
|
2025-06-02 00:50:08.294851 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-06-02 00:50:08.294858 | orchestrator | Monday 02 June 2025 00:48:59 +0000 (0:00:04.075) 0:04:51.913 ***********
2025-06-02 00:50:08.294865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 00:50:08.294872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:50:08.294882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes':
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:50:08.294910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False,
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 00:50:08.294918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 00:50:08.294925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294937 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:50:08.294951 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:50:08.294964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 00:50:08.294972 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:50:08.294979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.294993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:50:08.295004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 00:50:08.295018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']},
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 00:50:08.295026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.295033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.295040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 00:50:08.295050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:50:08.295057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:50:08.295064 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:50:08.295071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.295084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter',
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.295092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:50:08.295099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2',
'active_passive': True}}}})
2025-06-02 00:50:08.295110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 00:50:08.295117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:50:08.295124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:50:08.295139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 00:50:08.295146 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.295153 | orchestrator | 2025-06-02 00:50:08.295160 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-02 00:50:08.295166 | orchestrator | Monday 02 June 2025 00:49:01 +0000 (0:00:01.224) 0:04:53.137 *********** 2025-06-02 00:50:08.295173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 00:50:08.295180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 00:50:08.295187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 00:50:08.295206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 00:50:08.295214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 00:50:08.295225 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.295232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 00:50:08.295240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 00:50:08.295247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 00:50:08.295253 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.295260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 00:50:08.295267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 
00:50:08.295274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 00:50:08.295281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 00:50:08.295287 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.295294 | orchestrator | 2025-06-02 00:50:08.295301 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-02 00:50:08.295308 | orchestrator | Monday 02 June 2025 00:49:02 +0000 (0:00:01.010) 0:04:54.148 *********** 2025-06-02 00:50:08.295314 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.295321 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.295327 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.295334 | orchestrator | 2025-06-02 00:50:08.295341 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-02 00:50:08.295347 | orchestrator | Monday 02 June 2025 00:49:02 +0000 (0:00:00.474) 0:04:54.623 *********** 2025-06-02 00:50:08.295357 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.295364 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.295371 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.295378 | orchestrator | 2025-06-02 00:50:08.295384 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-02 00:50:08.295391 | orchestrator | Monday 02 June 2025 00:49:04 +0000 (0:00:01.599) 
0:04:56.222 *********** 2025-06-02 00:50:08.295398 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.295404 | orchestrator | 2025-06-02 00:50:08.295411 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-02 00:50:08.295418 | orchestrator | Monday 02 June 2025 00:49:05 +0000 (0:00:01.692) 0:04:57.915 *********** 2025-06-02 00:50:08.295424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:50:08.295437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:50:08.295445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 00:50:08.295452 | orchestrator | 2025-06-02 00:50:08.295459 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-02 00:50:08.295465 | orchestrator | Monday 02 June 2025 00:49:08 +0000 (0:00:02.646) 0:05:00.561 *********** 2025-06-02 00:50:08.295515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 00:50:08.295533 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.295541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 00:50:08.295548 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 00:50:08.295555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 00:50:08.295563 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.295569 | orchestrator | 2025-06-02 00:50:08.295576 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-02 00:50:08.295583 | orchestrator | Monday 02 June 2025 00:49:08 +0000 (0:00:00.373) 0:05:00.935 *********** 2025-06-02 00:50:08.295590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 00:50:08.295596 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.295603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 00:50:08.295610 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.295616 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 00:50:08.295626 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.295633 | orchestrator | 2025-06-02 00:50:08.295639 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-02 00:50:08.295646 | orchestrator | Monday 02 June 2025 00:49:09 +0000 (0:00:00.935) 0:05:01.871 *********** 2025-06-02 00:50:08.295653 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.295659 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.295666 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.295673 | orchestrator | 2025-06-02 00:50:08.295679 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-02 00:50:08.295686 | orchestrator | Monday 02 June 2025 00:49:10 +0000 (0:00:00.453) 0:05:02.324 *********** 2025-06-02 00:50:08.295700 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.295707 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.295713 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.295720 | orchestrator | 2025-06-02 00:50:08.295727 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-02 00:50:08.295738 | orchestrator | Monday 02 June 2025 00:49:11 +0000 (0:00:01.315) 0:05:03.639 *********** 2025-06-02 00:50:08.295745 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:50:08.295751 | orchestrator | 2025-06-02 00:50:08.295758 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-02 00:50:08.295765 | orchestrator | Monday 02 June 2025 00:49:13 +0000 (0:00:01.692) 0:05:05.332 *********** 2025-06-02 00:50:08.295771 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.295779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.295787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.295797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.295812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.295819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 00:50:08.295826 | orchestrator | 2025-06-02 00:50:08.295833 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-02 00:50:08.295840 | orchestrator | 
Monday 02 June 2025 00:49:19 +0000 (0:00:06.278) 0:05:11.611 *********** 2025-06-02 00:50:08.295847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.295854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}})  2025-06-02 00:50:08.295868 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.295879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.295886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.295893 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.295900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.295907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 00:50:08.295918 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.295925 | orchestrator | 2025-06-02 00:50:08.295932 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-02 00:50:08.295939 | orchestrator | Monday 02 June 2025 00:49:20 +0000 (0:00:00.610) 0:05:12.222 *********** 2025-06-02 00:50:08.295948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 00:50:08.295959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 00:50:08.295966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 00:50:08.295973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 00:50:08.295980 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.295987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 00:50:08.295993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 00:50:08.296000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 00:50:08.296007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 00:50:08.296013 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.296020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 00:50:08.296027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 00:50:08.296034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 00:50:08.296040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 00:50:08.296047 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.296058 | orchestrator | 2025-06-02 00:50:08.296065 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-02 00:50:08.296072 | orchestrator | 
Monday 02 June 2025 00:49:21 +0000 (0:00:01.555) 0:05:13.777 *********** 2025-06-02 00:50:08.296078 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.296085 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.296092 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.296098 | orchestrator | 2025-06-02 00:50:08.296105 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-02 00:50:08.296111 | orchestrator | Monday 02 June 2025 00:49:22 +0000 (0:00:01.289) 0:05:15.066 *********** 2025-06-02 00:50:08.296118 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.296125 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.296131 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.296138 | orchestrator | 2025-06-02 00:50:08.296144 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-02 00:50:08.296151 | orchestrator | Monday 02 June 2025 00:49:24 +0000 (0:00:02.016) 0:05:17.083 *********** 2025-06-02 00:50:08.296157 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.296164 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.296171 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.296177 | orchestrator | 2025-06-02 00:50:08.296184 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-02 00:50:08.296191 | orchestrator | Monday 02 June 2025 00:49:25 +0000 (0:00:00.323) 0:05:17.406 *********** 2025-06-02 00:50:08.296231 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.296238 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.296246 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.296257 | orchestrator | 2025-06-02 00:50:08.296268 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-02 00:50:08.296279 | orchestrator | Monday 02 
June 2025 00:49:25 +0000 (0:00:00.561) 0:05:17.967 *********** 2025-06-02 00:50:08.296290 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.296301 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.296308 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.296315 | orchestrator | 2025-06-02 00:50:08.296322 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-02 00:50:08.296328 | orchestrator | Monday 02 June 2025 00:49:26 +0000 (0:00:00.292) 0:05:18.260 *********** 2025-06-02 00:50:08.296339 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.296346 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.296353 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.296360 | orchestrator | 2025-06-02 00:50:08.296367 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-02 00:50:08.296373 | orchestrator | Monday 02 June 2025 00:49:26 +0000 (0:00:00.307) 0:05:18.567 *********** 2025-06-02 00:50:08.296380 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.296387 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.296393 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.296400 | orchestrator | 2025-06-02 00:50:08.296407 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-02 00:50:08.296413 | orchestrator | Monday 02 June 2025 00:49:26 +0000 (0:00:00.298) 0:05:18.866 *********** 2025-06-02 00:50:08.296420 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.296427 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.296433 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.296440 | orchestrator | 2025-06-02 00:50:08.296447 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-02 00:50:08.296454 | orchestrator | Monday 02 
June 2025 00:49:27 +0000 (0:00:00.777) 0:05:19.643 *********** 2025-06-02 00:50:08.296460 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.296467 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.296474 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.296480 | orchestrator | 2025-06-02 00:50:08.296492 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-02 00:50:08.296499 | orchestrator | Monday 02 June 2025 00:49:28 +0000 (0:00:00.639) 0:05:20.282 *********** 2025-06-02 00:50:08.296506 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.296512 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.296519 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.296526 | orchestrator | 2025-06-02 00:50:08.296533 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-02 00:50:08.296539 | orchestrator | Monday 02 June 2025 00:49:28 +0000 (0:00:00.357) 0:05:20.640 *********** 2025-06-02 00:50:08.296546 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.296552 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.296559 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.296566 | orchestrator | 2025-06-02 00:50:08.296573 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-02 00:50:08.296579 | orchestrator | Monday 02 June 2025 00:49:29 +0000 (0:00:01.174) 0:05:21.814 *********** 2025-06-02 00:50:08.296586 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.296595 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.296607 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.296618 | orchestrator | 2025-06-02 00:50:08.296629 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-02 00:50:08.296639 | orchestrator | Monday 02 June 2025 00:49:30 +0000 (0:00:00.863) 0:05:22.678 *********** 2025-06-02 
00:50:08.296646 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.296652 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.296659 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.296666 | orchestrator | 2025-06-02 00:50:08.296673 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-02 00:50:08.296679 | orchestrator | Monday 02 June 2025 00:49:31 +0000 (0:00:00.852) 0:05:23.530 *********** 2025-06-02 00:50:08.296686 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.296693 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.296700 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.296706 | orchestrator | 2025-06-02 00:50:08.296712 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-02 00:50:08.296721 | orchestrator | Monday 02 June 2025 00:49:39 +0000 (0:00:08.088) 0:05:31.618 *********** 2025-06-02 00:50:08.296732 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.296743 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.296750 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.296756 | orchestrator | 2025-06-02 00:50:08.296762 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-02 00:50:08.296769 | orchestrator | Monday 02 June 2025 00:49:40 +0000 (0:00:00.697) 0:05:32.316 *********** 2025-06-02 00:50:08.296775 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.296781 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.296787 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.296794 | orchestrator | 2025-06-02 00:50:08.296800 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-02 00:50:08.296806 | orchestrator | Monday 02 June 2025 00:49:49 +0000 (0:00:08.819) 0:05:41.135 *********** 2025-06-02 00:50:08.296812 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 00:50:08.296818 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.296825 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.296831 | orchestrator | 2025-06-02 00:50:08.296837 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-02 00:50:08.296845 | orchestrator | Monday 02 June 2025 00:49:52 +0000 (0:00:03.733) 0:05:44.868 *********** 2025-06-02 00:50:08.296856 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:50:08.296867 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:50:08.296874 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:50:08.296881 | orchestrator | 2025-06-02 00:50:08.296887 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-02 00:50:08.296894 | orchestrator | Monday 02 June 2025 00:50:01 +0000 (0:00:09.150) 0:05:54.019 *********** 2025-06-02 00:50:08.296906 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.296913 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.296919 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.296925 | orchestrator | 2025-06-02 00:50:08.296931 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-02 00:50:08.296938 | orchestrator | Monday 02 June 2025 00:50:02 +0000 (0:00:00.333) 0:05:54.353 *********** 2025-06-02 00:50:08.296944 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.296950 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.296956 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.296963 | orchestrator | 2025-06-02 00:50:08.296972 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-02 00:50:08.296978 | orchestrator | Monday 02 June 2025 00:50:02 +0000 (0:00:00.670) 0:05:55.024 *********** 2025-06-02 00:50:08.296985 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 00:50:08.296991 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.297001 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.297008 | orchestrator | 2025-06-02 00:50:08.297014 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-02 00:50:08.297020 | orchestrator | Monday 02 June 2025 00:50:03 +0000 (0:00:00.324) 0:05:55.348 *********** 2025-06-02 00:50:08.297027 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.297033 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.297039 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.297045 | orchestrator | 2025-06-02 00:50:08.297052 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-02 00:50:08.297058 | orchestrator | Monday 02 June 2025 00:50:03 +0000 (0:00:00.316) 0:05:55.665 *********** 2025-06-02 00:50:08.297064 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.297071 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.297077 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.297083 | orchestrator | 2025-06-02 00:50:08.297089 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-02 00:50:08.297096 | orchestrator | Monday 02 June 2025 00:50:03 +0000 (0:00:00.307) 0:05:55.973 *********** 2025-06-02 00:50:08.297102 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:50:08.297108 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:50:08.297114 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:50:08.297121 | orchestrator | 2025-06-02 00:50:08.297127 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-02 00:50:08.297133 | orchestrator | Monday 02 June 2025 00:50:04 +0000 (0:00:00.631) 0:05:56.604 *********** 2025-06-02 00:50:08.297139 | orchestrator | ok: [testbed-node-0] 
2025-06-02 00:50:08.297146 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.297152 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.297158 | orchestrator | 2025-06-02 00:50:08.297164 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-02 00:50:08.297171 | orchestrator | Monday 02 June 2025 00:50:05 +0000 (0:00:00.931) 0:05:57.536 *********** 2025-06-02 00:50:08.297177 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:50:08.297183 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:50:08.297189 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:50:08.297206 | orchestrator | 2025-06-02 00:50:08.297213 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:50:08.297219 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 00:50:08.297226 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 00:50:08.297232 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 00:50:08.297242 | orchestrator | 2025-06-02 00:50:08.297248 | orchestrator | 2025-06-02 00:50:08.297255 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:50:08.297261 | orchestrator | Monday 02 June 2025 00:50:06 +0000 (0:00:00.847) 0:05:58.384 *********** 2025-06-02 00:50:08.297267 | orchestrator | =============================================================================== 2025-06-02 00:50:08.297273 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.15s 2025-06-02 00:50:08.297279 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.82s 2025-06-02 00:50:08.297286 | orchestrator | loadbalancer : Start backup haproxy container 
--------------------------- 8.09s 2025-06-02 00:50:08.297292 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.28s 2025-06-02 00:50:08.297298 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.34s 2025-06-02 00:50:08.297304 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.15s 2025-06-02 00:50:08.297310 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.10s 2025-06-02 00:50:08.297317 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.85s 2025-06-02 00:50:08.297323 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.59s 2025-06-02 00:50:08.297329 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.33s 2025-06-02 00:50:08.297335 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.28s 2025-06-02 00:50:08.297341 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.19s 2025-06-02 00:50:08.297347 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.11s 2025-06-02 00:50:08.297354 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.08s 2025-06-02 00:50:08.297360 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.04s 2025-06-02 00:50:08.297366 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.83s 2025-06-02 00:50:08.297372 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.81s 2025-06-02 00:50:08.297378 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.73s 2025-06-02 00:50:08.297384 | orchestrator | haproxy-config : Copying over designate haproxy config 
------------------ 3.71s 2025-06-02 00:50:08.297391 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.49s 2025-06-02 00:50:11.309685 | orchestrator | 2025-06-02 00:50:11 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED 2025-06-02 00:50:11.310701 | orchestrator | 2025-06-02 00:50:11 | INFO  | Task c5ab2128-9e41-4cf8-bd92-d55dc069d3c5 is in state SUCCESS 2025-06-02 00:50:11.312337 | orchestrator | 2025-06-02 00:50:11 | INFO  | Task c2c30f6f-e20c-46b0-b667-aff12a5ca853 is in state STARTED 2025-06-02 00:50:11.313690 | orchestrator | 2025-06-02 00:50:11 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:50:11.313938 | orchestrator | 2025-06-02 00:50:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:50:14.354780 | orchestrator | 2025-06-02 00:50:14 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED 2025-06-02 00:50:14.354870 | orchestrator | 2025-06-02 00:50:14 | INFO  | Task c2c30f6f-e20c-46b0-b667-aff12a5ca853 is in state STARTED 2025-06-02 00:50:14.356582 | orchestrator | 2025-06-02 00:50:14 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:50:14.356616 | orchestrator | 2025-06-02 00:50:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:50:17.388331 | orchestrator | 2025-06-02 00:50:17 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED 2025-06-02 00:50:17.388562 | orchestrator | 2025-06-02 00:50:17 | INFO  | Task c2c30f6f-e20c-46b0-b667-aff12a5ca853 is in state SUCCESS 2025-06-02 00:50:17.388630 | orchestrator | 2025-06-02 00:50:17 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:50:17.388645 | orchestrator | 2025-06-02 00:50:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:50:20.435387 | orchestrator | 2025-06-02 00:50:20 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED 2025-06-02 
00:50:20.438592 | orchestrator | 2025-06-02 00:50:20 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:50:20.438638 | orchestrator | 2025-06-02 00:50:20 | INFO  | Wait 1 second(s) until the next check [... identical "STARTED" / "Wait 1 second(s) until the next check" polls for tasks f8692390-1a14-49aa-a5cc-dc899122e42f and ac6d63aa-190f-4827-ad3f-5d4d1ba84625 repeated every ~3 seconds until 00:51:54 ...] 2025-06-02 00:51:54.904560 | orchestrator | 2025-06-02 00:51:54 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED 2025-06-02 00:51:54.908371 | orchestrator | 2025-06-02 00:51:54 | INFO 
| Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:51:54.908413 | orchestrator | 2025-06-02 00:51:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:51:57.960107 | orchestrator | 2025-06-02 00:51:57 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED 2025-06-02 00:51:57.962134 | orchestrator | 2025-06-02 00:51:57 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state STARTED 2025-06-02 00:51:57.962172 | orchestrator | 2025-06-02 00:51:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:52:01.022321 | orchestrator | 2025-06-02 00:52:01 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED 2025-06-02 00:52:01.022412 | orchestrator | 2025-06-02 00:52:01 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:52:01.023320 | orchestrator | 2025-06-02 00:52:01 | INFO  | Task ac6d63aa-190f-4827-ad3f-5d4d1ba84625 is in state SUCCESS 2025-06-02 00:52:01.023355 | orchestrator | 2025-06-02 00:52:01.023370 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2025-06-02 00:52:01.023382 | orchestrator | -vvvv to see details 2025-06-02 00:52:01.023396 | orchestrator | 2025-06-02 00:52:01.023408 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:52:01.023420 | orchestrator | 2025-06-02 00:52:01.023431 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:52:01.023442 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.023454 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.023465 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.023476 | orchestrator | 2025-06-02 00:52:01.023488 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:52:01.023499 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 
2025-06-02 00:52:01.023511 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-06-02 00:52:01.023522 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-06-02 00:52:01.023533 | orchestrator |
2025-06-02 00:52:01.023544 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-06-02 00:52:01.023555 | orchestrator |
2025-06-02 00:52:01.023566 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-02 00:52:01.023577 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-2, testbed-node-1
2025-06-02 00:52:01.023588 | orchestrator |
2025-06-02 00:52:01.023599 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-06-02 00:52:01.023636 | orchestrator | failed: [testbed-node-2] (item={'name': 'vm.max_map_count', 'value': 262144}) => {"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true}
2025-06-02 00:52:01.023652 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true}], "unreachable": true}
2025-06-02 00:52:01.023666 | orchestrator | failed: [testbed-node-0] (item={'name': 'vm.max_map_count', 'value': 262144}) => {"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}
2025-06-02 00:52:01.023678 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}], "unreachable": true}
2025-06-02 00:52:01.023689 | orchestrator | failed: [testbed-node-1] (item={'name': 'vm.max_map_count', 'value': 262144}) => {"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true}
2025-06-02 00:52:01.023713 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true}], "unreachable": true}
2025-06-02 00:52:01.023726 | orchestrator |
2025-06-02 00:52:01.023737 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:52:01.023748 | orchestrator | testbed-node-0 : ok=3 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:52:01.023770 | orchestrator | testbed-node-1 : ok=3 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:52:01.023782 | orchestrator | testbed-node-2 : ok=3 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:52:01.023793 | orchestrator |
2025-06-02 00:52:01.023804 | orchestrator |
2025-06-02 00:52:01.023815 | orchestrator | None
2025-06-02 00:52:01.025094 | orchestrator |
2025-06-02 00:52:01.025121 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-02 00:52:01.025133 | orchestrator |
2025-06-02 00:52:01.025145 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-02 00:52:01.025156 | orchestrator | Monday 02 June 2025 00:41:42 +0000 (0:00:00.624) 0:00:00.624 ***********
2025-06-02 00:52:01.025181 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.025192 | orchestrator |
2025-06-02 00:52:01.025203 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-02 00:52:01.025215 | orchestrator | Monday 02 June 2025 00:41:43 +0000 (0:00:00.974) 0:00:01.598 ***********
2025-06-02 00:52:01.025226 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.025237 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.025422 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.025440 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.026277 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.026311 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.026327 | orchestrator |
2025-06-02 00:52:01.026342 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-02 00:52:01.026357 | orchestrator | Monday 02 June 2025 00:41:45 +0000 (0:00:01.528) 0:00:03.127 ***********
2025-06-02 00:52:01.026371 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.026385 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.026399 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.026412 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.026425 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.026438 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.026452 | orchestrator |
2025-06-02 00:52:01.026463 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-02 00:52:01.026474 | orchestrator | Monday 02 June 2025 00:41:45 +0000 (0:00:01.018) 0:00:04.030 ***********
2025-06-02 00:52:01.026485 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.026496 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.026507 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.026518 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.026529 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.026540 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.026551 | orchestrator |
2025-06-02 00:52:01.026562 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-02 00:52:01.026573 | orchestrator | Monday 02 June 2025 00:41:46 +0000 (0:00:00.648) 0:00:05.049 ***********
2025-06-02 00:52:01.026584 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.026595 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.026606 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.026617 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.026628 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.026639 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.026650 | orchestrator |
2025-06-02 00:52:01.026661 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-02 00:52:01.026672 | orchestrator | Monday 02 June 2025 00:41:47 +0000 (0:00:00.740) 0:00:05.698 ***********
2025-06-02 00:52:01.026683 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.026694 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.026705 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.026716 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.026727 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.026738 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.026749 | orchestrator |
2025-06-02 00:52:01.026760 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-02 00:52:01.026771 | orchestrator | Monday 02 June 2025 00:41:48 +0000 (0:00:00.924) 0:00:06.438 ***********
2025-06-02 00:52:01.026782 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.026793 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.026804 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.026815 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.026826 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.026837 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.026848 | orchestrator |
2025-06-02 00:52:01.026859 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-02 00:52:01.026870 | orchestrator | Monday 02 June 2025 00:41:49 +0000 (0:00:00.924) 0:00:07.363 ***********
2025-06-02 00:52:01.026908 | orchestrator | skipping: [testbed-node-0]
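The unreachable errors in the opensearch play above all trace back to one cause reported in the error message itself: the orchestrator's identity file `/ansible/secrets/id_rsa` was missing, so publickey authentication as `dragon@192.168.16.x` failed before the sysctl change was ever attempted. A minimal sketch of a pre-flight check, assuming only the path and login taken from that error message (nothing here is part of the testbed tooling):

```shell
# Minimal sketch: report whether the identity file the play expects exists.
# The path is the one named in the "no such identity" error above.
check_identity() {
  if [ -f "$1" ]; then
    echo "identity present: $1"
  else
    echo "missing identity: $1"
  fi
}

check_identity /ansible/secrets/id_rsa

# With the key restored, connectivity could be rechecked before rerunning, e.g.:
#   ssh -i /ansible/secrets/id_rsa dragon@192.168.16.12 true
```

On the failed run this would have printed the "missing identity" branch, matching the ssh error in the task output.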
2025-06-02 00:52:01.026920 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.026932 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.026981 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.026994 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.027005 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.027016 | orchestrator |
2025-06-02 00:52:01.027027 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-02 00:52:01.027039 | orchestrator | Monday 02 June 2025 00:41:50 +0000 (0:00:00.766) 0:00:08.129 ***********
2025-06-02 00:52:01.027049 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.027060 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.027071 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.027082 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.027093 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.027104 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.027115 | orchestrator |
2025-06-02 00:52:01.027138 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-02 00:52:01.027150 | orchestrator | Monday 02 June 2025 00:41:51 +0000 (0:00:01.282) 0:00:09.411 ***********
2025-06-02 00:52:01.027161 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 00:52:01.027172 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 00:52:01.027184 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 00:52:01.027195 | orchestrator |
2025-06-02 00:52:01.027206 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-02 00:52:01.027216 | orchestrator | Monday 02 June 2025 00:41:52 +0000 (0:00:00.752) 0:00:10.163 ***********
2025-06-02 00:52:01.027228 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.027239 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.027250 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.027261 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.027272 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.027283 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.027294 | orchestrator |
2025-06-02 00:52:01.027323 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-02 00:52:01.027335 | orchestrator | Monday 02 June 2025 00:41:52 +0000 (0:00:00.941) 0:00:11.105 ***********
2025-06-02 00:52:01.027347 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 00:52:01.027358 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 00:52:01.027369 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 00:52:01.027380 | orchestrator |
2025-06-02 00:52:01.027391 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-02 00:52:01.027402 | orchestrator | Monday 02 June 2025 00:41:55 +0000 (0:00:02.862) 0:00:13.968 ***********
2025-06-02 00:52:01.027413 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 00:52:01.027424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 00:52:01.027435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 00:52:01.027447 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.027458 | orchestrator |
2025-06-02 00:52:01.027469 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-02 00:52:01.027481 | orchestrator | Monday 02 June 2025 00:41:56 +0000 (0:00:01.056) 0:00:15.025 ***********
2025-06-02 00:52:01.027493 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.027520 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.027550 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.027562 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.027573 | orchestrator |
2025-06-02 00:52:01.027585 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-02 00:52:01.027596 | orchestrator | Monday 02 June 2025 00:41:57 +0000 (0:00:00.910) 0:00:15.935 ***********
2025-06-02 00:52:01.027609 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.027622 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.027634 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.027645 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.027656 | orchestrator |
2025-06-02 00:52:01.027667 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-02 00:52:01.027678 | orchestrator | Monday 02 June 2025 00:41:58 +0000 (0:00:00.429) 0:00:16.364 ***********
2025-06-02 00:52:01.027696 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 00:41:53.642113', 'end': '2025-06-02 00:41:53.917813', 'delta': '0:00:00.275700', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.027718 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 00:41:54.605462', 'end': '2025-06-02 00:41:54.876117', 'delta': '0:00:00.270655', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.027731 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 00:41:55.431914', 'end': '2025-06-02 00:41:55.679410', 'delta': '0:00:00.247496', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.027750 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.027761 | orchestrator |
2025-06-02 00:52:01.027773 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-02 00:52:01.027784 | orchestrator | Monday 02 June 2025 00:41:58 +0000 (0:00:00.257) 0:00:16.622 ***********
2025-06-02 00:52:01.027795 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.027806 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.027817 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.027829 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.027840 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.027851 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.027862 | orchestrator |
2025-06-02 00:52:01.027873 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-02 00:52:01.027885 | orchestrator | Monday 02 June 2025 00:41:59 +0000 (0:00:01.271) 0:00:17.893 ***********
2025-06-02 00:52:01.027896 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.027907 | orchestrator |
2025-06-02 00:52:01.027918 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-02 00:52:01.027929 | orchestrator | Monday 02 June 2025 00:42:00 +0000 (0:00:00.672) 0:00:18.566 ***********
2025-06-02 00:52:01.027963 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.027984 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.028004 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.028022 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.028034 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.028045 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.028056 | orchestrator |
2025-06-02 00:52:01.028067 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-02 00:52:01.028078 | orchestrator | Monday 02 June 2025 00:42:01 +0000 (0:00:01.163) 0:00:19.729 ***********
2025-06-02 00:52:01.028089 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028100 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.028111 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.028121 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.028132 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.028143 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.028154 | orchestrator |
2025-06-02 00:52:01.028165 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 00:52:01.028176 | orchestrator | Monday 02 June 2025 00:42:03 +0000 (0:00:01.464) 0:00:21.194 ***********
2025-06-02 00:52:01.028187 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028197 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.028208 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.028219 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.028230 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.028241 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.028252 | orchestrator |
2025-06-02 00:52:01.028263 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-02 00:52:01.028287 | orchestrator | Monday 02 June 2025 00:42:04 +0000 (0:00:01.014) 0:00:22.209 ***********
2025-06-02 00:52:01.028299 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028310 | orchestrator |
2025-06-02 00:52:01.028321 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-02 00:52:01.028332 | orchestrator | Monday 02 June 2025 00:42:04 +0000 (0:00:00.154) 0:00:22.364 ***********
2025-06-02 00:52:01.028351 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028362 | orchestrator |
2025-06-02 00:52:01.028373 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 00:52:01.028384 | orchestrator | Monday 02 June 2025 00:42:04 +0000 (0:00:00.316) 0:00:22.680 ***********
2025-06-02 00:52:01.028395 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028406 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.028417 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.028428 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.028439 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.028450 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.028461 | orchestrator |
2025-06-02 00:52:01.028473 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-02 00:52:01.028490 | orchestrator | Monday 02 June 2025 00:42:05 +0000 (0:00:00.755) 0:00:23.436 ***********
2025-06-02 00:52:01.028501 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028512 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.028523 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.028534 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.028545 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.028556 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.028567 | orchestrator |
2025-06-02 00:52:01.028579 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-02 00:52:01.028590 | orchestrator | Monday 02 June 2025 00:42:06 +0000 (0:00:01.306) 0:00:24.742 ***********
2025-06-02 00:52:01.028601 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028611 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.028623 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.028634 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.028645 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.028656 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.028667 | orchestrator |
2025-06-02 00:52:01.028678 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-02 00:52:01.028689 | orchestrator | Monday 02 June 2025 00:42:07 +0000 (0:00:00.749) 0:00:25.492 ***********
2025-06-02 00:52:01.028700 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028711 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.028722 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.028732 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.028743 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.028754 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.028765 | orchestrator |
2025-06-02 00:52:01.028776 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-02 00:52:01.028787 | orchestrator | Monday 02 June 2025 00:42:08 +0000 (0:00:00.810) 0:00:26.302 ***********
2025-06-02 00:52:01.028798 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028809 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.028820 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.028831 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.028842 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.028853 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.028864 | orchestrator |
2025-06-02 00:52:01.028875 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-02 00:52:01.028886 | orchestrator | Monday 02 June 2025 00:42:08 +0000 (0:00:00.633) 0:00:26.935 ***********
2025-06-02 00:52:01.028897 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.028908 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.028919 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.028930 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.028962 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.028974 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.028985 | orchestrator |
2025-06-02 00:52:01.028996 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-02 00:52:01.029014 | orchestrator | Monday 02 June 2025 00:42:09 +0000 (0:00:00.802) 0:00:27.738 ***********
2025-06-02 00:52:01.029025 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.029036 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.029047 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.029058 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.029069 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.029080 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.029091 | orchestrator |
2025-06-02 00:52:01.029102 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-02 00:52:01.029113 | orchestrator | Monday 02 June 2025 00:42:10 +0000 (0:00:00.876) 0:00:28.614 ***********
2025-06-02 00:52:01.029124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:52:01.029282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:52:01.029294 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.029306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:52:01.029437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:52:01.029449 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.029460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:52:01.029472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model':
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d', 'scsi-SQEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9278158-5488-4924-af7c-e9a9bf543d8d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.029598 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.029610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3a2aacf8--31c8--546a--a559--f7f9618b27d4-osd--block--3a2aacf8--31c8--546a--a559--f7f9618b27d4', 'dm-uuid-LVM-NNCStWpcr9tenQmNxri7LASeTRMcEv6AoOnwkS7N482btF35qnYY416n1aLIbtP8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1905453d--e612--5c47--8424--6bc4888ba216-osd--block--1905453d--e612--5c47--8424--6bc4888ba216', 'dm-uuid-LVM-bmI6VfEwWdXz2xP9C2LcPSFfTFAwKWN8tG6LgHTqOcSBqmax2tJJF1Q3vaj0PA1J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029677 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.029689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--89fe9f69--ec16--58f3--8212--bc080cf4c28c-osd--block--89fe9f69--ec16--58f3--8212--bc080cf4c28c', 'dm-uuid-LVM-ognnUHwkOr4oV4bQOavTnlv8gd9RlZxuN1XyIq7r9rNVLx10b02DvAy6y4irDY5P'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.029836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3a308c11--b64c--503e--b49b--4b3a12050ecf-osd--block--3a308c11--b64c--503e--b49b--4b3a12050ecf', 'dm-uuid-LVM-lh09BsjJdtc94H2oQxQdRnzKfwmOYgWsiBp0OoPA26YaAK1R1G3gj2Iu4RnsSePy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3a2aacf8--31c8--546a--a559--f7f9618b27d4-osd--block--3a2aacf8--31c8--546a--a559--f7f9618b27d4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z2iHD6-ULgy-BkLr-xDEJ-3xhd-8Hdb-xZrL0x', 'scsi-0QEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931', 'scsi-SQEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.029874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1905453d--e612--5c47--8424--6bc4888ba216-osd--block--1905453d--e612--5c47--8424--6bc4888ba216'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-t7nJSk-psuA-nDo5-CXH5-9b9Q-apxe-e719j6', 'scsi-0QEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1', 'scsi-SQEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.029898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2', 'scsi-SQEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.029925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.029958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.029979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-02 00:52:01.029998 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.030009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644-osd--block--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644', 'dm-uuid-LVM-avSIsvovV4pOZUqGYx7LvX2X2ezUL6JLR2N4CiJgOJgCbEu5wAT023vZOdeKr6HB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17a6e190--aa70--5b53--9f6a--9d016360bd22-osd--block--17a6e190--aa70--5b53--9f6a--9d016360bd22', 'dm-uuid-LVM-fS3KfuMJUAk8TssYvM3o8inwlApLtYRI1qvo6Tzwi5hYLdJhvnuVrwB79sNe3JWX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part1', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part14', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part15', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part16', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--89fe9f69--ec16--58f3--8212--bc080cf4c28c-osd--block--89fe9f69--ec16--58f3--8212--bc080cf4c28c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Pd0Enh-ZyGr-1RZz-WfSA-5okV-5eB7-miMFkN', 'scsi-0QEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d', 'scsi-SQEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3a308c11--b64c--503e--b49b--4b3a12050ecf-osd--block--3a308c11--b64c--503e--b49b--4b3a12050ecf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q2xvrl-5jo8-WvCY-nGjg-aqiH-iAhZ-K3eckw', 'scsi-0QEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd', 'scsi-SQEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5', 'scsi-SQEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030282 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.030293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:52:01.030329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': 
'10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644-osd--block--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GF3yfX-GBy3-gDda-qdDG-hLeU-qQZm-CrHybA', 'scsi-0QEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e', 'scsi-SQEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--17a6e190--aa70--5b53--9f6a--9d016360bd22-osd--block--17a6e190--aa70--5b53--9f6a--9d016360bd22'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ku3b3c-npBj-z1Yj-LXYu-ex50-hTYq-uEKlYj', 'scsi-0QEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172', 'scsi-SQEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb', 'scsi-SQEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:52:01.030417 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.030429 | orchestrator |
2025-06-02 00:52:01.030441 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-02 00:52:01.030452 | orchestrator | Monday 02 June 2025 00:42:12 +0000 (0:00:01.925) 0:00:30.540 *********** 2025-06-02 00:52:01.030464 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030476 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030488 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0,
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030499 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030511 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030532 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030550 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030563 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030580 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1a2bbbe-9362-43b3-96f2-dcfed4bebf74-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 00:52:01.030605 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030618 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030629 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030641 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030653 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030664 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030686 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030704 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030717 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030729 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a277547-9c69-455c-ba34-e403b5f8d4c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 
512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030752 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030764 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.030781 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030794 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030805 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030817 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030829 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.030850 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.031137 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.031161 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 00:52:01.031174 | orchestrator | skipping: [testbed-node-2] => (item=sda; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-06-02 00:52:01.031203 | orchestrator | skipping: [testbed-node-2] => (item=sr0; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-06-02 00:52:01.031215 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.031301 | orchestrator | skipping: [testbed-node-3] => (item=dm-0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031318 | orchestrator | skipping: [testbed-node-3] => (item=dm-1; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031330 | orchestrator | skipping: [testbed-node-3] => (item=loop0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031342 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.031354 | orchestrator | skipping: [testbed-node-3] => (item=loop1; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031378 | orchestrator | skipping: [testbed-node-3] => (item=loop2; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031390 | orchestrator | skipping: [testbed-node-3] => (item=loop3; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031469 | orchestrator | skipping: [testbed-node-3] => (item=loop4; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031485 | orchestrator | skipping: [testbed-node-3] => (item=loop5; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031497 | orchestrator | skipping: [testbed-node-3] => (item=loop6; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031574 | orchestrator | skipping: [testbed-node-3] => (item=loop7; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031690 | orchestrator | skipping: [testbed-node-3] => (item=sda; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031710 | orchestrator | skipping: [testbed-node-4] => (item=dm-0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031722 | orchestrator | skipping: [testbed-node-3] => (item=sdb; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031742 | orchestrator | skipping: [testbed-node-3] => (item=sdc; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031759 | orchestrator | skipping: [testbed-node-4] => (item=dm-1; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031839 | orchestrator | skipping: [testbed-node-3] => (item=sdd; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031856 | orchestrator | skipping: [testbed-node-3] => (item=sr0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031867 | orchestrator | skipping: [testbed-node-4] => (item=loop0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031886 | orchestrator | skipping: [testbed-node-4] => (item=loop1; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031898 | orchestrator | skipping: [testbed-node-4] => (item=loop2; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.031910 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.031926 | orchestrator | skipping: [testbed-node-4] => (item=loop3; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032057 | orchestrator | skipping: [testbed-node-4] => (item=loop4; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032078 | orchestrator | skipping: [testbed-node-4] => (item=loop5; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032090 | orchestrator | skipping: [testbed-node-4] => (item=loop6; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032101 | orchestrator | skipping: [testbed-node-4] => (item=loop7; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032199 | orchestrator | skipping: [testbed-node-4] => (item=sda; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032218 | orchestrator | skipping: [testbed-node-4] => (item=sdb; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032230 | orchestrator | skipping: [testbed-node-4] => (item=sdc; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032250 | orchestrator | skipping: [testbed-node-4] => (item=sdd; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032266 | orchestrator | skipping: [testbed-node-5] => (item=dm-0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032369 | orchestrator | skipping: [testbed-node-4] => (item=sr0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032389 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.032401 | orchestrator | skipping: [testbed-node-5] => (item=dm-1; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032421 | orchestrator | skipping: [testbed-node-5] => (item=loop0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032433 | orchestrator | skipping: [testbed-node-5] => (item=loop1; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032444 | orchestrator | skipping: [testbed-node-5] => (item=loop2; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032461 | orchestrator | skipping: [testbed-node-5] => (item=loop3; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032542 | orchestrator | skipping: [testbed-node-5] => (item=loop4; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032558 | orchestrator | skipping: [testbed-node-5] => (item=loop5; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032570 | orchestrator | skipping: [testbed-node-5] => (item=loop6; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032588 | orchestrator | skipping: [testbed-node-5] => (item=loop7; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032677 | orchestrator | skipping: [testbed-node-5] => (item=sda; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032696 | orchestrator | skipping: [testbed-node-5] => (item=sdb; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032715 | orchestrator | skipping: [testbed-node-5] => (item=sdc; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032726 | orchestrator | skipping: [testbed-node-5] => (item=sdd; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-02 00:52:01.032742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:52:01.032754 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.032765 | orchestrator | 2025-06-02 00:52:01.032777 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-02 00:52:01.032788 | orchestrator | Monday 02 June 2025 00:42:13 +0000 (0:00:01.238) 0:00:31.778 *********** 2025-06-02 00:52:01.032800 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.032811 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.032822 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.032900 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.032916 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.032927 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.032997 | orchestrator | 2025-06-02 00:52:01.033012 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-02 00:52:01.033024 | orchestrator | Monday 02 June 2025 00:42:15 +0000 (0:00:01.656) 0:00:33.434 *********** 2025-06-02 00:52:01.033035 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.033046 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.033057 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.033068 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.033079 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.033099 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.033110 | orchestrator | 2025-06-02 00:52:01.033121 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 00:52:01.033132 | orchestrator | Monday 02 June 2025 00:42:16 +0000 (0:00:00.692) 0:00:34.126 *********** 2025-06-02 00:52:01.033143 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
00:52:01.033154 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.033165 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.033176 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.033187 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.033198 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.033209 | orchestrator | 2025-06-02 00:52:01.033220 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 00:52:01.033231 | orchestrator | Monday 02 June 2025 00:42:16 +0000 (0:00:00.793) 0:00:34.920 *********** 2025-06-02 00:52:01.033242 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.033253 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.033264 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.033275 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.033285 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.033296 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.033307 | orchestrator | 2025-06-02 00:52:01.033318 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 00:52:01.033329 | orchestrator | Monday 02 June 2025 00:42:17 +0000 (0:00:00.684) 0:00:35.604 *********** 2025-06-02 00:52:01.033340 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.033350 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.033361 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.033372 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.033383 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.033394 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.033405 | orchestrator | 2025-06-02 00:52:01.033415 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 00:52:01.033426 | orchestrator | Monday 02 June 
2025 00:42:18 +0000 (0:00:01.290) 0:00:36.895 *********** 2025-06-02 00:52:01.033437 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.033448 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.033459 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.033470 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.033480 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.033490 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.033500 | orchestrator | 2025-06-02 00:52:01.033510 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-02 00:52:01.033520 | orchestrator | Monday 02 June 2025 00:42:19 +0000 (0:00:00.962) 0:00:37.857 *********** 2025-06-02 00:52:01.033532 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 00:52:01.033544 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-06-02 00:52:01.033556 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 00:52:01.033567 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-06-02 00:52:01.033579 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-06-02 00:52:01.033591 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 00:52:01.033603 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-06-02 00:52:01.033615 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-02 00:52:01.033626 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-02 00:52:01.033638 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-06-02 00:52:01.033649 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-02 00:52:01.033661 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-02 00:52:01.033673 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-02 00:52:01.033684 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-2) 2025-06-02 00:52:01.033702 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-02 00:52:01.033714 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-02 00:52:01.033726 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-02 00:52:01.033737 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-02 00:52:01.033749 | orchestrator | 2025-06-02 00:52:01.033760 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-02 00:52:01.033778 | orchestrator | Monday 02 June 2025 00:42:22 +0000 (0:00:02.678) 0:00:40.535 *********** 2025-06-02 00:52:01.033790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 00:52:01.033802 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 00:52:01.033814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 00:52:01.033825 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.033837 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-02 00:52:01.033849 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-02 00:52:01.033861 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-02 00:52:01.033873 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.033883 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-02 00:52:01.033892 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-02 00:52:01.033902 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-02 00:52:01.033912 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.033972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 00:52:01.033985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 00:52:01.033995 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 00:52:01.034043 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.034063 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-02 00:52:01.034073 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-02 00:52:01.034083 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-02 00:52:01.034093 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.034103 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-02 00:52:01.034112 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-02 00:52:01.034122 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-02 00:52:01.034132 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.034142 | orchestrator | 2025-06-02 00:52:01.034152 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-02 00:52:01.034161 | orchestrator | Monday 02 June 2025 00:42:23 +0000 (0:00:00.829) 0:00:41.365 *********** 2025-06-02 00:52:01.034171 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.034181 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.034191 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.034201 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.034211 | orchestrator | 2025-06-02 00:52:01.034221 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-02 00:52:01.034232 | orchestrator | Monday 02 June 2025 00:42:24 +0000 (0:00:00.871) 0:00:42.236 *********** 2025-06-02 00:52:01.034241 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.034251 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 00:52:01.034261 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.034271 | orchestrator | 2025-06-02 00:52:01.034281 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-02 00:52:01.034291 | orchestrator | Monday 02 June 2025 00:42:24 +0000 (0:00:00.268) 0:00:42.505 *********** 2025-06-02 00:52:01.034300 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.034318 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.034328 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.034338 | orchestrator | 2025-06-02 00:52:01.034348 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-02 00:52:01.034358 | orchestrator | Monday 02 June 2025 00:42:24 +0000 (0:00:00.445) 0:00:42.951 *********** 2025-06-02 00:52:01.034368 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.034378 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.034388 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.034398 | orchestrator | 2025-06-02 00:52:01.034408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-02 00:52:01.034417 | orchestrator | Monday 02 June 2025 00:42:25 +0000 (0:00:00.265) 0:00:43.216 *********** 2025-06-02 00:52:01.034427 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.034437 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.034447 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.034457 | orchestrator | 2025-06-02 00:52:01.034467 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-02 00:52:01.034477 | orchestrator | Monday 02 June 2025 00:42:25 +0000 (0:00:00.341) 0:00:43.558 *********** 2025-06-02 00:52:01.034486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 00:52:01.034496 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 00:52:01.034506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 00:52:01.034516 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.034526 | orchestrator | 2025-06-02 00:52:01.034536 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-02 00:52:01.034545 | orchestrator | Monday 02 June 2025 00:42:25 +0000 (0:00:00.341) 0:00:43.899 *********** 2025-06-02 00:52:01.034555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 00:52:01.034565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 00:52:01.034575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 00:52:01.034584 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.034594 | orchestrator | 2025-06-02 00:52:01.034604 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-02 00:52:01.034614 | orchestrator | Monday 02 June 2025 00:42:26 +0000 (0:00:00.350) 0:00:44.250 *********** 2025-06-02 00:52:01.034624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 00:52:01.034633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 00:52:01.034648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 00:52:01.034658 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.034668 | orchestrator | 2025-06-02 00:52:01.034678 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-02 00:52:01.034688 | orchestrator | Monday 02 June 2025 00:42:26 +0000 (0:00:00.579) 0:00:44.830 *********** 2025-06-02 00:52:01.034697 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.034707 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.034717 | orchestrator | ok: [testbed-node-5] 
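
The repeated per-device `skipping: … 'false_condition': 'osd_auto_discovery | default(False) | bool'` records above come from a ceph-ansible facts task that loops over every block device reported by Ansible facts and is skipped item by item while OSD auto discovery is disabled. A minimal sketch of that pattern, assuming an illustrative task name and `set_fact` body (only the loop-over-devices shape and the `when` expression are taken from the log):

```yaml
# Hypothetical sketch, not the actual ceph-ansible task.
# Loops over ansible_facts['devices'] (loop0..loop7, sda..sdd, sr0 above);
# every item is skipped while osd_auto_discovery evaluates to false,
# which produces one "skipping: ... false_condition" line per device.
- name: Collect candidate OSD devices (sketch)
  ansible.builtin.set_fact:
    discovered_devices: "{{ discovered_devices | default([]) + [item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  loop_control:
    label: "{{ item.key }}"
  when: osd_auto_discovery | default(False) | bool
```

With `osd_auto_discovery` unset (the default here), the `when` clause is false for every item, so the console prints the full skipped item dict per device, which is why each node emits one large record per loop device and disk.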
2025-06-02 00:52:01.034727 | orchestrator | 2025-06-02 00:52:01.034737 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-02 00:52:01.034747 | orchestrator | Monday 02 June 2025 00:42:27 +0000 (0:00:00.637) 0:00:45.467 *********** 2025-06-02 00:52:01.034756 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 00:52:01.034766 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-02 00:52:01.034776 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-02 00:52:01.034786 | orchestrator | 2025-06-02 00:52:01.034796 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-02 00:52:01.034806 | orchestrator | Monday 02 June 2025 00:42:28 +0000 (0:00:00.831) 0:00:46.299 *********** 2025-06-02 00:52:01.034849 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 00:52:01.034866 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 00:52:01.034877 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 00:52:01.034887 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-02 00:52:01.034896 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-02 00:52:01.034906 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 00:52:01.034916 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-02 00:52:01.034926 | orchestrator | 2025-06-02 00:52:01.034935 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-02 00:52:01.034995 | orchestrator | Monday 02 June 2025 00:42:28 +0000 (0:00:00.752) 0:00:47.051 *********** 2025-06-02 00:52:01.035006 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-06-02 00:52:01.035016 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 00:52:01.035026 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 00:52:01.035035 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-02 00:52:01.035045 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-02 00:52:01.035055 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 00:52:01.035064 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-02 00:52:01.035074 | orchestrator | 2025-06-02 00:52:01.035084 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 00:52:01.035093 | orchestrator | Monday 02 June 2025 00:42:30 +0000 (0:00:01.923) 0:00:48.975 *********** 2025-06-02 00:52:01.035103 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.035114 | orchestrator | 2025-06-02 00:52:01.035124 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 00:52:01.035133 | orchestrator | Monday 02 June 2025 00:42:32 +0000 (0:00:01.169) 0:00:50.144 *********** 2025-06-02 00:52:01.035143 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.035153 | orchestrator | 2025-06-02 00:52:01.035162 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 00:52:01.035172 | orchestrator | Monday 02 June 2025 
00:42:33 +0000 (0:00:01.439) 0:00:51.584 *********** 2025-06-02 00:52:01.035182 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.035192 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.035201 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.035211 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.035221 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.035230 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.035240 | orchestrator | 2025-06-02 00:52:01.035250 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 00:52:01.035260 | orchestrator | Monday 02 June 2025 00:42:34 +0000 (0:00:00.870) 0:00:52.455 *********** 2025-06-02 00:52:01.035269 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.035279 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.035289 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.035299 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.035308 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.035318 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.035328 | orchestrator | 2025-06-02 00:52:01.035338 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 00:52:01.035355 | orchestrator | Monday 02 June 2025 00:42:35 +0000 (0:00:01.595) 0:00:54.050 *********** 2025-06-02 00:52:01.035365 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.035375 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.035385 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.035395 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.035405 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.035413 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.035421 | orchestrator | 2025-06-02 00:52:01.035429 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2025-06-02 00:52:01.035440 | orchestrator | Monday 02 June 2025 00:42:36 +0000 (0:00:01.057) 0:00:55.108 *********** 2025-06-02 00:52:01.035449 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.035457 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.035465 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.035473 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.035480 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.035488 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.035496 | orchestrator | 2025-06-02 00:52:01.035504 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 00:52:01.035513 | orchestrator | Monday 02 June 2025 00:42:37 +0000 (0:00:00.961) 0:00:56.069 *********** 2025-06-02 00:52:01.035520 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.035528 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.035537 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.035545 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.035553 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.035561 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.035569 | orchestrator | 2025-06-02 00:52:01.035577 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 00:52:01.035585 | orchestrator | Monday 02 June 2025 00:42:38 +0000 (0:00:00.930) 0:00:57.000 *********** 2025-06-02 00:52:01.035619 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.035628 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.035636 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.035644 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.035652 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.035660 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.035668 | 
orchestrator | 2025-06-02 00:52:01.035676 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 00:52:01.035684 | orchestrator | Monday 02 June 2025 00:42:39 +0000 (0:00:00.504) 0:00:57.504 *********** 2025-06-02 00:52:01.035692 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.035700 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.035708 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.035716 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.035724 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.035732 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.035739 | orchestrator | 2025-06-02 00:52:01.035747 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 00:52:01.035755 | orchestrator | Monday 02 June 2025 00:42:40 +0000 (0:00:00.667) 0:00:58.172 *********** 2025-06-02 00:52:01.035763 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.035771 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.035779 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.035787 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.035795 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.035803 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.035812 | orchestrator | 2025-06-02 00:52:01.035820 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 00:52:01.035828 | orchestrator | Monday 02 June 2025 00:42:41 +0000 (0:00:01.126) 0:00:59.299 *********** 2025-06-02 00:52:01.035836 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.035843 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.035851 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.035864 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.035873 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.035880 | 
orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.035888 | orchestrator | 2025-06-02 00:52:01.035896 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 00:52:01.035904 | orchestrator | Monday 02 June 2025 00:42:42 +0000 (0:00:01.276) 0:01:00.576 *********** 2025-06-02 00:52:01.035912 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.035920 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.035928 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.035936 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.035960 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.035968 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.035976 | orchestrator | 2025-06-02 00:52:01.035984 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 00:52:01.035992 | orchestrator | Monday 02 June 2025 00:42:43 +0000 (0:00:00.564) 0:01:01.140 *********** 2025-06-02 00:52:01.036000 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.036008 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.036016 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.036024 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.036032 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.036040 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.036047 | orchestrator | 2025-06-02 00:52:01.036055 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 00:52:01.036063 | orchestrator | Monday 02 June 2025 00:42:43 +0000 (0:00:00.918) 0:01:02.059 *********** 2025-06-02 00:52:01.036071 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.036079 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.036087 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.036095 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 00:52:01.036103 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.036111 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.036119 | orchestrator | 2025-06-02 00:52:01.036127 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 00:52:01.036135 | orchestrator | Monday 02 June 2025 00:42:44 +0000 (0:00:00.645) 0:01:02.704 *********** 2025-06-02 00:52:01.036143 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.036151 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.036159 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.036167 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.036175 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.036183 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.036190 | orchestrator | 2025-06-02 00:52:01.036198 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 00:52:01.036206 | orchestrator | Monday 02 June 2025 00:42:45 +0000 (0:00:00.715) 0:01:03.420 *********** 2025-06-02 00:52:01.036214 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.036222 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.036230 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.036238 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.036246 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.036254 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.036262 | orchestrator | 2025-06-02 00:52:01.036270 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 00:52:01.036282 | orchestrator | Monday 02 June 2025 00:42:45 +0000 (0:00:00.589) 0:01:04.010 *********** 2025-06-02 00:52:01.036290 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.036298 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.036306 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.036314 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.036322 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.036330 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.036337 | orchestrator | 2025-06-02 00:52:01.036345 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 00:52:01.036358 | orchestrator | Monday 02 June 2025 00:42:46 +0000 (0:00:00.692) 0:01:04.702 *********** 2025-06-02 00:52:01.036366 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.036374 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.036382 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.036389 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.036397 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.036405 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.036413 | orchestrator | 2025-06-02 00:52:01.036421 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 00:52:01.036452 | orchestrator | Monday 02 June 2025 00:42:47 +0000 (0:00:00.513) 0:01:05.216 *********** 2025-06-02 00:52:01.036461 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.036469 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.036477 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.036485 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.036493 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.036501 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.036509 | orchestrator | 2025-06-02 00:52:01.036517 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 00:52:01.036525 | orchestrator | Monday 02 June 2025 00:42:47 +0000 (0:00:00.693) 0:01:05.910 *********** 2025-06-02 00:52:01.036533 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 00:52:01.036541 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.036550 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.036558 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.036566 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.036574 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.036582 | orchestrator | 2025-06-02 00:52:01.036590 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 00:52:01.036598 | orchestrator | Monday 02 June 2025 00:42:48 +0000 (0:00:00.545) 0:01:06.455 *********** 2025-06-02 00:52:01.036606 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.036614 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.036622 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.036630 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.036638 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.036646 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.036654 | orchestrator | 2025-06-02 00:52:01.036662 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-02 00:52:01.036670 | orchestrator | Monday 02 June 2025 00:42:49 +0000 (0:00:01.099) 0:01:07.554 *********** 2025-06-02 00:52:01.036677 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:52:01.036685 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:52:01.036694 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:52:01.036702 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.036710 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.036717 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.036725 | orchestrator | 2025-06-02 00:52:01.036734 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-02 00:52:01.036742 | orchestrator | Monday 02 June 2025 00:42:50 +0000 (0:00:01.522) 0:01:09.077 
*********** 2025-06-02 00:52:01.036750 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:52:01.036758 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.036766 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.036774 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.036782 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:52:01.036789 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:52:01.036797 | orchestrator | 2025-06-02 00:52:01.036806 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-02 00:52:01.036813 | orchestrator | Monday 02 June 2025 00:42:52 +0000 (0:00:01.765) 0:01:10.842 *********** 2025-06-02 00:52:01.036822 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.036834 | orchestrator | 2025-06-02 00:52:01.036843 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-02 00:52:01.036851 | orchestrator | Monday 02 June 2025 00:42:53 +0000 (0:00:01.094) 0:01:11.937 *********** 2025-06-02 00:52:01.036859 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.036867 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.036875 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.036883 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.036891 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.036899 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.036907 | orchestrator | 2025-06-02 00:52:01.036915 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-02 00:52:01.036923 | orchestrator | Monday 02 June 2025 00:42:54 +0000 (0:00:00.742) 0:01:12.679 *********** 2025-06-02 00:52:01.036931 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 00:52:01.036954 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.036964 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.036972 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.036979 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.036987 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.036995 | orchestrator | 2025-06-02 00:52:01.037003 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-02 00:52:01.037012 | orchestrator | Monday 02 June 2025 00:42:55 +0000 (0:00:00.586) 0:01:13.266 *********** 2025-06-02 00:52:01.037020 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 00:52:01.037028 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 00:52:01.037036 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 00:52:01.037052 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 00:52:01.037066 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 00:52:01.037079 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 00:52:01.037091 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 00:52:01.037104 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 00:52:01.037118 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 00:52:01.037128 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 00:52:01.037136 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 
00:52:01.037144 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 00:52:01.037151 | orchestrator | 2025-06-02 00:52:01.037185 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-02 00:52:01.037195 | orchestrator | Monday 02 June 2025 00:42:56 +0000 (0:00:01.500) 0:01:14.766 *********** 2025-06-02 00:52:01.037213 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:52:01.037222 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:52:01.037230 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:52:01.037238 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.037246 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.037254 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.037262 | orchestrator | 2025-06-02 00:52:01.037270 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-02 00:52:01.037278 | orchestrator | Monday 02 June 2025 00:42:57 +0000 (0:00:00.921) 0:01:15.687 *********** 2025-06-02 00:52:01.037286 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.037293 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.037310 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.037318 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.037326 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.037334 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.037342 | orchestrator | 2025-06-02 00:52:01.037350 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-02 00:52:01.037357 | orchestrator | Monday 02 June 2025 00:42:58 +0000 (0:00:00.774) 0:01:16.462 *********** 2025-06-02 00:52:01.037365 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.037373 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.037381 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.037389 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.037397 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.037405 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.037413 | orchestrator | 2025-06-02 00:52:01.037421 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-02 00:52:01.037429 | orchestrator | Monday 02 June 2025 00:42:58 +0000 (0:00:00.574) 0:01:17.037 *********** 2025-06-02 00:52:01.037436 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.037444 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.037452 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.037460 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.037468 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.037476 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.037484 | orchestrator | 2025-06-02 00:52:01.037492 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-02 00:52:01.037500 | orchestrator | Monday 02 June 2025 00:42:59 +0000 (0:00:00.741) 0:01:17.778 *********** 2025-06-02 00:52:01.037508 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.037516 | orchestrator | 2025-06-02 00:52:01.037524 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-02 00:52:01.037532 | orchestrator | Monday 02 June 2025 00:43:00 +0000 (0:00:01.112) 0:01:18.891 *********** 2025-06-02 00:52:01.037540 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.037548 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.037556 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.037564 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 00:52:01.037572 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.037580 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.037588 | orchestrator | 2025-06-02 00:52:01.037595 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-02 00:52:01.037603 | orchestrator | Monday 02 June 2025 00:43:57 +0000 (0:00:57.006) 0:02:15.898 *********** 2025-06-02 00:52:01.037611 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 00:52:01.037619 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 00:52:01.037627 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 00:52:01.037635 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.037643 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 00:52:01.037651 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 00:52:01.037659 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 00:52:01.037667 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.037675 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 00:52:01.037683 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 00:52:01.037691 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 00:52:01.037698 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.037715 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 00:52:01.037723 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 00:52:01.037731 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 00:52:01.037739 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.037747 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 00:52:01.037755 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 00:52:01.037763 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 00:52:01.037771 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.037779 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 00:52:01.037787 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 00:52:01.037795 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 00:52:01.037824 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.037834 | orchestrator | 2025-06-02 00:52:01.037842 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-02 00:52:01.037850 | orchestrator | Monday 02 June 2025 00:43:58 +0000 (0:00:00.829) 0:02:16.728 *********** 2025-06-02 00:52:01.037858 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.037866 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.037874 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.037882 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.037890 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.037898 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.037906 | orchestrator | 2025-06-02 00:52:01.037914 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-02 00:52:01.037922 | orchestrator | Monday 02 June 2025 00:43:59 +0000 (0:00:00.542) 0:02:17.271 *********** 2025-06-02 00:52:01.037930 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.037976 | orchestrator | 2025-06-02 00:52:01.037987 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-02 00:52:01.037995 | orchestrator | Monday 02 June 2025 00:43:59 +0000 (0:00:00.141) 0:02:17.413 *********** 2025-06-02 00:52:01.038003 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038011 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038040 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038048 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038057 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038065 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038072 | orchestrator | 2025-06-02 00:52:01.038080 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-02 00:52:01.038088 | orchestrator | Monday 02 June 2025 00:44:00 +0000 (0:00:00.725) 0:02:18.138 *********** 2025-06-02 00:52:01.038096 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038104 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038112 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038120 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038128 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038136 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038144 | orchestrator | 2025-06-02 00:52:01.038152 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-02 00:52:01.038160 | orchestrator | Monday 02 June 2025 00:44:00 +0000 (0:00:00.528) 0:02:18.666 *********** 2025-06-02 00:52:01.038168 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038176 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038184 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038191 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038199 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038213 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038222 | orchestrator | 2025-06-02 00:52:01.038230 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-02 00:52:01.038238 | orchestrator | Monday 02 June 2025 00:44:01 +0000 (0:00:00.835) 0:02:19.502 *********** 2025-06-02 00:52:01.038246 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.038254 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.038261 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.038269 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.038278 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.038285 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.038293 | orchestrator | 2025-06-02 00:52:01.038301 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-02 00:52:01.038309 | orchestrator | Monday 02 June 2025 00:44:03 +0000 (0:00:02.020) 0:02:21.523 *********** 2025-06-02 00:52:01.038317 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.038325 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.038333 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.038341 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.038349 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.038357 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.038365 | orchestrator | 2025-06-02 00:52:01.038373 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-02 00:52:01.038381 | orchestrator | Monday 02 June 2025 00:44:04 +0000 (0:00:00.893) 0:02:22.416 *********** 2025-06-02 00:52:01.038389 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.038397 | orchestrator | 2025-06-02 00:52:01.038404 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-02 00:52:01.038411 | orchestrator | Monday 02 June 2025 00:44:05 +0000 (0:00:01.169) 0:02:23.586 *********** 2025-06-02 00:52:01.038417 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038424 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038431 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038438 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038445 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038451 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038458 | orchestrator | 2025-06-02 00:52:01.038465 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-02 00:52:01.038475 | orchestrator | Monday 02 June 2025 00:44:06 +0000 (0:00:00.622) 0:02:24.208 *********** 2025-06-02 00:52:01.038482 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038489 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038495 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038502 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038509 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038515 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038522 | orchestrator | 2025-06-02 00:52:01.038529 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-02 00:52:01.038536 | orchestrator | Monday 02 June 2025 00:44:06 +0000 (0:00:00.806) 0:02:25.014 *********** 2025-06-02 00:52:01.038542 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038549 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038556 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038562 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038569 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038576 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038582 | orchestrator | 2025-06-02 00:52:01.038589 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-02 00:52:01.038619 | orchestrator | Monday 02 June 2025 00:44:07 +0000 (0:00:00.523) 0:02:25.538 *********** 2025-06-02 00:52:01.038627 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038634 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038648 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038655 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038661 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038668 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038675 | orchestrator | 2025-06-02 00:52:01.038682 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-02 00:52:01.038689 | orchestrator | Monday 02 June 2025 00:44:08 +0000 (0:00:00.659) 0:02:26.198 *********** 2025-06-02 00:52:01.038695 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038702 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038709 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038716 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038722 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038729 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038736 | orchestrator | 2025-06-02 00:52:01.038743 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-02 00:52:01.038750 | orchestrator | Monday 02 June 2025 00:44:08 +0000 (0:00:00.588) 0:02:26.787 *********** 2025-06-02 00:52:01.038756 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038763 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038770 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038777 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038783 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038790 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038797 | orchestrator | 2025-06-02 00:52:01.038804 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-02 00:52:01.038810 | orchestrator | Monday 02 June 2025 00:44:09 +0000 (0:00:00.702) 0:02:27.489 *********** 2025-06-02 00:52:01.038817 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038824 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038830 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038837 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038844 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038851 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038857 | orchestrator | 2025-06-02 00:52:01.038864 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-02 00:52:01.038871 | orchestrator | Monday 02 June 2025 00:44:09 +0000 (0:00:00.586) 0:02:28.076 *********** 2025-06-02 00:52:01.038877 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.038884 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.038891 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.038898 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.038904 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.038911 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.038918 | orchestrator | 2025-06-02 00:52:01.038924 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-02 00:52:01.038931 | orchestrator | Monday 02 June 2025 00:44:10 +0000 
(0:00:00.979) 0:02:29.056 *********** 2025-06-02 00:52:01.038952 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.038963 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.038970 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.038977 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.038984 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.038990 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.038997 | orchestrator | 2025-06-02 00:52:01.039004 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-02 00:52:01.039011 | orchestrator | Monday 02 June 2025 00:44:12 +0000 (0:00:01.148) 0:02:30.205 *********** 2025-06-02 00:52:01.039017 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.039024 | orchestrator | 2025-06-02 00:52:01.039031 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-02 00:52:01.039042 | orchestrator | Monday 02 June 2025 00:44:13 +0000 (0:00:01.075) 0:02:31.280 *********** 2025-06-02 00:52:01.039048 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-02 00:52:01.039055 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-02 00:52:01.039062 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-02 00:52:01.039069 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-02 00:52:01.039075 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-02 00:52:01.039082 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-02 00:52:01.039089 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-02 00:52:01.039096 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-02 00:52:01.039102 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/) 2025-06-02 00:52:01.039112 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-02 00:52:01.039119 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-02 00:52:01.039126 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-02 00:52:01.039132 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-02 00:52:01.039139 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-02 00:52:01.039145 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-02 00:52:01.039152 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-02 00:52:01.039159 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-02 00:52:01.039165 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-02 00:52:01.039172 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-02 00:52:01.039179 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-02 00:52:01.039185 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-02 00:52:01.039212 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-02 00:52:01.039220 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-02 00:52:01.039227 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-02 00:52:01.039233 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-02 00:52:01.039240 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-02 00:52:01.039246 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-02 00:52:01.039253 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-02 00:52:01.039260 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 
2025-06-02 00:52:01.039266 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-02 00:52:01.039273 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-02 00:52:01.039279 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-02 00:52:01.039286 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-02 00:52:01.039292 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-02 00:52:01.039299 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-02 00:52:01.039305 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-02 00:52:01.039312 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-02 00:52:01.039318 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-02 00:52:01.039325 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-02 00:52:01.039332 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-02 00:52:01.039338 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-02 00:52:01.039345 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-02 00:52:01.039352 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-02 00:52:01.039362 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-02 00:52:01.039369 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-02 00:52:01.039376 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-02 00:52:01.039382 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 00:52:01.039389 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 00:52:01.039395 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 00:52:01.039402 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 00:52:01.039409 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-02 00:52:01.039415 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 00:52:01.039422 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-02 00:52:01.039429 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 00:52:01.039435 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 00:52:01.039442 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 00:52:01.039448 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 00:52:01.039455 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 00:52:01.039462 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 00:52:01.039468 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 00:52:01.039475 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 00:52:01.039482 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 00:52:01.039488 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 00:52:01.039495 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 00:52:01.039501 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 00:52:01.039508 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 00:52:01.039515 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 00:52:01.039522 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 00:52:01.039528 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 00:52:01.039538 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 00:52:01.039545 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 00:52:01.039552 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 00:52:01.039558 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 00:52:01.039565 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 00:52:01.039571 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 00:52:01.039578 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 00:52:01.039585 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 00:52:01.039591 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 00:52:01.039598 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 00:52:01.039621 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-02 00:52:01.039629 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 00:52:01.039636 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 00:52:01.039643 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 00:52:01.039653 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-02 00:52:01.039660 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-02 00:52:01.039667 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 
2025-06-02 00:52:01.039674 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 00:52:01.039681 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 00:52:01.039687 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-06-02 00:52:01.039694 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-06-02 00:52:01.039701 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-06-02 00:52:01.039707 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-06-02 00:52:01.039714 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-06-02 00:52:01.039721 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-06-02 00:52:01.039727 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-06-02 00:52:01.039734 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-06-02 00:52:01.039741 | orchestrator |
2025-06-02 00:52:01.039747 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-06-02 00:52:01.039754 | orchestrator | Monday 02 June 2025 00:44:19 +0000 (0:00:06.366) 0:02:37.647 ***********
2025-06-02 00:52:01.039761 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.039768 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.039774 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.039781 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.039788 | orchestrator |
2025-06-02 00:52:01.039795 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-06-02 00:52:01.039801 | orchestrator | Monday 02 June 2025 00:44:20 +0000 (0:00:01.073) 0:02:38.721 ***********
2025-06-02 00:52:01.039808 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.039816 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.039822 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.039829 | orchestrator |
2025-06-02 00:52:01.039836 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-06-02 00:52:01.039843 | orchestrator | Monday 02 June 2025 00:44:21 +0000 (0:00:00.770) 0:02:39.492 ***********
2025-06-02 00:52:01.039849 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.039856 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.039863 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.039869 | orchestrator |
2025-06-02 00:52:01.039876 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-06-02 00:52:01.039883 | orchestrator | Monday 02 June 2025 00:44:22 +0000 (0:00:01.373) 0:02:40.866 ***********
2025-06-02 00:52:01.039890 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.039896 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.039903 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.039910 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.039917 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.039923 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.039930 | orchestrator |
2025-06-02 00:52:01.039952 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-06-02 00:52:01.039964 | orchestrator | Monday 02 June 2025 00:44:23 +0000 (0:00:00.688) 0:02:41.554 ***********
2025-06-02 00:52:01.039971 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.039978 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.039988 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.039995 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.040002 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.040008 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.040015 | orchestrator |
2025-06-02 00:52:01.040022 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-06-02 00:52:01.040028 | orchestrator | Monday 02 June 2025 00:44:24 +0000 (0:00:00.853) 0:02:42.407 ***********
2025-06-02 00:52:01.040035 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040042 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040049 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040055 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040062 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040069 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040076 | orchestrator |
2025-06-02 00:52:01.040082 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-06-02 00:52:01.040089 | orchestrator | Monday 02 June 2025 00:44:24 +0000 (0:00:00.556) 0:02:42.964 ***********
2025-06-02 00:52:01.040096 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040103 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040129 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040137 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040144 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040150 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040157 | orchestrator |
2025-06-02 00:52:01.040164 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-06-02 00:52:01.040170 | orchestrator | Monday 02 June 2025 00:44:25 +0000 (0:00:00.657) 0:02:43.621 ***********
2025-06-02 00:52:01.040177 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040184 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040190 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040197 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040203 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040210 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040217 | orchestrator |
2025-06-02 00:52:01.040223 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-06-02 00:52:01.040230 | orchestrator | Monday 02 June 2025 00:44:26 +0000 (0:00:00.654) 0:02:44.316 ***********
2025-06-02 00:52:01.040237 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040243 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040250 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040257 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040263 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040270 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040277 | orchestrator |
2025-06-02 00:52:01.040284 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-06-02 00:52:01.040290 | orchestrator | Monday 02 June 2025 00:44:26 +0000 (0:00:00.652) 0:02:44.970 ***********
2025-06-02 00:52:01.040297 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040304 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040310 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040317 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040324 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040330 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040337 | orchestrator |
2025-06-02 00:52:01.040344 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-06-02 00:52:01.040350 | orchestrator | Monday 02 June 2025 00:44:27 +0000 (0:00:00.710) 0:02:45.623 ***********
2025-06-02 00:52:01.040361 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040368 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040375 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040381 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040388 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040395 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040401 | orchestrator |
2025-06-02 00:52:01.040408 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-06-02 00:52:01.040415 | orchestrator | Monday 02 June 2025 00:44:28 +0000 (0:00:00.710) 0:02:46.333 ***********
2025-06-02 00:52:01.040421 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040428 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040435 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040441 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.040448 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.040455 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.040461 | orchestrator |
2025-06-02 00:52:01.040468 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-06-02 00:52:01.040475 | orchestrator | Monday 02 June 2025 00:44:30 +0000 (0:00:02.781) 0:02:49.114 ***********
2025-06-02 00:52:01.040481 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040488 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040495 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040501 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.040508 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.040515 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.040521 | orchestrator |
2025-06-02 00:52:01.040528 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-06-02 00:52:01.040535 | orchestrator | Monday 02 June 2025 00:44:31 +0000 (0:00:00.697) 0:02:49.812 ***********
2025-06-02 00:52:01.040542 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040548 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040555 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040562 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.040568 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.040575 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.040582 | orchestrator |
2025-06-02 00:52:01.040588 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-06-02 00:52:01.040595 | orchestrator | Monday 02 June 2025 00:44:32 +0000 (0:00:00.637) 0:02:50.449 ***********
2025-06-02 00:52:01.040602 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040609 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040615 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040622 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040629 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040635 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040642 | orchestrator |
2025-06-02 00:52:01.040654 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-06-02 00:52:01.040661 | orchestrator | Monday 02 June 2025 00:44:32 +0000 (0:00:00.652) 0:02:51.102 ***********
2025-06-02 00:52:01.040668 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040675 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040681 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040688 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.040695 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.040702 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.040709 | orchestrator |
2025-06-02 00:52:01.040715 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-06-02 00:52:01.040743 | orchestrator | Monday 02 June 2025 00:44:33 +0000 (0:00:00.501) 0:02:51.603 ***********
2025-06-02 00:52:01.040751 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040758 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040765 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040772 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-06-02 00:52:01.040780 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-06-02 00:52:01.040788 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040794 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-06-02 00:52:01.040801 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-06-02 00:52:01.040808 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040815 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-06-02 00:52:01.040822 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-06-02 00:52:01.040829 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040835 | orchestrator |
2025-06-02 00:52:01.040842 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-06-02 00:52:01.040849 | orchestrator | Monday 02 June 2025 00:44:34 +0000 (0:00:00.740) 0:02:52.343 ***********
2025-06-02 00:52:01.040855 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040862 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040869 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040875 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040882 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040889 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040895 | orchestrator |
2025-06-02 00:52:01.040902 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-06-02 00:52:01.040908 | orchestrator | Monday 02 June 2025 00:44:34 +0000 (0:00:00.579) 0:02:52.923 ***********
2025-06-02 00:52:01.040915 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.040922 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.040928 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.040935 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.040977 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.040984 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.040991 | orchestrator |
2025-06-02 00:52:01.040998 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 00:52:01.041009 | orchestrator | Monday 02 June 2025 00:44:35 +0000 (0:00:00.677) 0:02:53.601 ***********
2025-06-02 00:52:01.041016 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041023 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.041029 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.041039 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.041046 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.041053 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.041060 | orchestrator |
2025-06-02 00:52:01.041067 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 00:52:01.041074 | orchestrator | Monday 02 June 2025 00:44:36 +0000 (0:00:00.538) 0:02:54.140 ***********
2025-06-02 00:52:01.041080 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041087 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.041094 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.041100 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.041107 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.041114 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.041121 | orchestrator |
2025-06-02 00:52:01.041127 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 00:52:01.041134 | orchestrator | Monday 02 June 2025 00:44:36 +0000 (0:00:00.638) 0:02:54.778 ***********
2025-06-02 00:52:01.041141 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041148 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.041155 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.041161 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.041188 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.041197 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.041203 | orchestrator |
2025-06-02 00:52:01.041210 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 00:52:01.041217 | orchestrator | Monday 02 June 2025 00:44:37 +0000 (0:00:00.521) 0:02:55.299 ***********
2025-06-02 00:52:01.041224 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041230 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.041237 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.041244 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.041251 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.041257 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.041264 | orchestrator |
2025-06-02 00:52:01.041271 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 00:52:01.041277 | orchestrator | Monday 02 June 2025 00:44:37 +0000 (0:00:00.748) 0:02:56.048 ***********
2025-06-02 00:52:01.041284 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 00:52:01.041291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 00:52:01.041297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 00:52:01.041304 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041310 | orchestrator |
2025-06-02 00:52:01.041317 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 00:52:01.041323 | orchestrator | Monday 02 June 2025 00:44:38 +0000 (0:00:00.353) 0:02:56.401 ***********
2025-06-02 00:52:01.041329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 00:52:01.041335 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 00:52:01.041341 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 00:52:01.041347 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041353 | orchestrator |
2025-06-02 00:52:01.041360 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 00:52:01.041366 | orchestrator | Monday 02 June 2025 00:44:38 +0000 (0:00:00.375) 0:02:56.777 ***********
2025-06-02 00:52:01.041372 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 00:52:01.041378 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 00:52:01.041384 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 00:52:01.041394 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041401 | orchestrator |
2025-06-02 00:52:01.041407 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 00:52:01.041413 | orchestrator | Monday 02 June 2025 00:44:39 +0000 (0:00:00.411) 0:02:57.188 ***********
2025-06-02 00:52:01.041419 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041426 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.041432 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.041438 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.041444 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.041451 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.041457 | orchestrator |
2025-06-02 00:52:01.041463 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 00:52:01.041469 | orchestrator | Monday 02 June 2025 00:44:39 +0000 (0:00:00.658) 0:02:57.847 ***********
2025-06-02 00:52:01.041475 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-06-02 00:52:01.041482 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041488 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-06-02 00:52:01.041494 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.041500 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-06-02 00:52:01.041507 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.041513 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 00:52:01.041519 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 00:52:01.041526 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 00:52:01.041532 | orchestrator |
2025-06-02 00:52:01.041538 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-06-02 00:52:01.041544 | orchestrator | Monday 02 June 2025 00:44:41 +0000 (0:00:02.053) 0:02:59.901 ***********
2025-06-02 00:52:01.041551 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.041557 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.041563 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.041569 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.041575 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.041581 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.041588 | orchestrator |
2025-06-02 00:52:01.041594 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 00:52:01.041600 | orchestrator | Monday 02 June 2025 00:44:44 +0000 (0:00:02.745) 0:03:02.647 ***********
2025-06-02 00:52:01.041607 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.041613 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.041619 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.041625 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.041631 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.041640 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.041647 | orchestrator |
2025-06-02 00:52:01.041653 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-02 00:52:01.041659 | orchestrator | Monday 02 June 2025 00:44:45 +0000 (0:00:00.961) 0:03:03.608 ***********
2025-06-02 00:52:01.041665 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.041672 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.041678 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.041684 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:52:01.041690 | orchestrator |
2025-06-02 00:52:01.041697 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-02 00:52:01.041703 | orchestrator | Monday 02 June 2025 00:44:46 +0000 (0:00:00.876) 0:03:04.484 ***********
2025-06-02 00:52:01.041709 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.041715 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.041722 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.041728 | orchestrator |
2025-06-02 00:52:01.041734 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-02 00:52:01.041761 | orchestrator | Monday 02 June 2025 00:44:46 +0000 (0:00:00.284) 0:03:04.769 ***********
2025-06-02 00:52:01.041768 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.041775 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.041781 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.041787 | orchestrator |
2025-06-02 00:52:01.041793 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-02 00:52:01.041800 | orchestrator | Monday 02 June 2025 00:44:48 +0000 (0:00:01.420) 0:03:06.189 ***********
2025-06-02 00:52:01.041806 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 00:52:01.041812 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 00:52:01.041818 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 00:52:01.041824 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041831 | orchestrator |
2025-06-02 00:52:01.041837 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-02 00:52:01.041843 | orchestrator | Monday 02 June 2025 00:44:48 +0000 (0:00:00.474) 0:03:06.663 ***********
2025-06-02 00:52:01.041849 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.041855 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.041862 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.041868 | orchestrator |
2025-06-02 00:52:01.041874 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-02 00:52:01.041881 | orchestrator | Monday 02 June 2025 00:44:48 +0000 (0:00:00.320) 0:03:06.984 ***********
2025-06-02 00:52:01.041887 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.041893 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.041899 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.041906 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.041912 | orchestrator |
2025-06-02 00:52:01.041918 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-02 00:52:01.041924 | orchestrator | Monday 02 June 2025 00:44:49 +0000 (0:00:00.864) 0:03:07.848 ***********
2025-06-02 00:52:01.041931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:52:01.041948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:52:01.041956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:52:01.041962 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.041968 | orchestrator |
2025-06-02 00:52:01.041975 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-02 00:52:01.041981 | orchestrator | Monday 02 June 2025 00:44:50 +0000 (0:00:00.318) 0:03:08.167 ***********
2025-06-02 00:52:01.041987 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.041993 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.041999 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.042005 | orchestrator |
2025-06-02 00:52:01.042012 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-02 00:52:01.042033 | orchestrator | Monday 02 June 2025 00:44:50 +0000 (0:00:00.272) 0:03:08.440 ***********
2025-06-02 00:52:01.042040 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042047 | orchestrator |
2025-06-02 00:52:01.042053 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-02 00:52:01.042059 | orchestrator | Monday 02 June 2025 00:44:50 +0000 (0:00:00.187) 0:03:08.627 ***********
2025-06-02 00:52:01.042065 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042072 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.042078 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.042084 | orchestrator |
2025-06-02 00:52:01.042090 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-02 00:52:01.042097 | orchestrator | Monday 02 June 2025 00:44:50 +0000 (0:00:00.259) 0:03:08.887 ***********
2025-06-02 00:52:01.042103 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042113 | orchestrator |
2025-06-02 00:52:01.042120 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-02 00:52:01.042126 | orchestrator | Monday 02 June 2025 00:44:50 +0000 (0:00:00.184) 0:03:09.072 ***********
2025-06-02 00:52:01.042132 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042139 | orchestrator |
2025-06-02 00:52:01.042145 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-02 00:52:01.042151 | orchestrator | Monday 02 June 2025 00:44:51 +0000 (0:00:00.193) 0:03:09.265 ***********
2025-06-02 00:52:01.042157 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042163 | orchestrator |
2025-06-02 00:52:01.042170 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-02 00:52:01.042176 | orchestrator | Monday 02 June 2025 00:44:51 +0000 (0:00:00.231) 0:03:09.497 ***********
2025-06-02 00:52:01.042182 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042188 | orchestrator |
2025-06-02 00:52:01.042194 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-06-02 00:52:01.042203 | orchestrator | Monday 02 June 2025 00:44:51 +0000 (0:00:00.185) 0:03:09.683 ***********
2025-06-02 00:52:01.042210 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042216 | orchestrator |
2025-06-02 00:52:01.042222 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-06-02 00:52:01.042228 | orchestrator | Monday 02 June 2025 00:44:51 +0000 (0:00:00.179) 0:03:09.862 ***********
2025-06-02 00:52:01.042235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:52:01.042241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:52:01.042247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:52:01.042253 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042260 | orchestrator |
2025-06-02 00:52:01.042266 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-06-02 00:52:01.042272 | orchestrator | Monday 02 June 2025 00:44:52 +0000 (0:00:00.334) 0:03:10.197 ***********
2025-06-02 00:52:01.042278 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042284 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.042291 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.042297 | orchestrator |
2025-06-02 00:52:01.042323 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-06-02 00:52:01.042331 | orchestrator | Monday 02 June 2025 00:44:52 +0000 (0:00:00.313) 0:03:10.511 ***********
2025-06-02 00:52:01.042337 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042343 | orchestrator |
2025-06-02 00:52:01.042349 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-06-02 00:52:01.042355 | orchestrator | Monday 02 June 2025 00:44:52 +0000 (0:00:00.191) 0:03:10.702 ***********
2025-06-02 00:52:01.042362 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042368 | orchestrator |
2025-06-02 00:52:01.042374 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-02 00:52:01.042380 | orchestrator | Monday 02 June 2025 00:44:52 +0000 (0:00:00.187) 0:03:10.890 ***********
2025-06-02 00:52:01.042386 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.042392 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.042398 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.042405 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.042411 | orchestrator |
2025-06-02 00:52:01.042417 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-02 00:52:01.042423 | orchestrator | Monday 02 June 2025 00:44:53 +0000 (0:00:00.834) 0:03:11.724 ***********
2025-06-02 00:52:01.042429 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.042436 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.042442 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.042448 | orchestrator |
2025-06-02 00:52:01.042455 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-02 00:52:01.042465 | orchestrator | Monday 02 June 2025 00:44:53 +0000 (0:00:00.272) 0:03:11.997 ***********
2025-06-02 00:52:01.042471 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.042477 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.042483 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.042490 | orchestrator |
2025-06-02 00:52:01.042496 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-02 00:52:01.042502 | orchestrator | Monday 02 June 2025 00:44:55 +0000 (0:00:01.153) 0:03:13.150 ***********
2025-06-02 00:52:01.042508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:52:01.042515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:52:01.042521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:52:01.042527 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042533 | orchestrator |
2025-06-02 00:52:01.042539 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-02 00:52:01.042546 | orchestrator | Monday 02 June 2025 00:44:56 +0000 (0:00:01.019) 0:03:14.170 ***********
2025-06-02 00:52:01.042552 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.042558 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.042564 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.042570 | orchestrator |
2025-06-02 00:52:01.042577 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-02 00:52:01.042583 | orchestrator | Monday 02 June 2025 00:44:56 +0000 (0:00:00.322) 0:03:14.493 ***********
2025-06-02 00:52:01.042589 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.042595 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.042601 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.042608 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.042614 | orchestrator |
2025-06-02 00:52:01.042620 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-02 00:52:01.042626 | orchestrator | Monday 02 June 2025 00:44:57 +0000 (0:00:00.957) 0:03:15.451 ***********
2025-06-02 00:52:01.042633 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.042639 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.042645 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.042651 | orchestrator |
2025-06-02 00:52:01.042657 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-02 00:52:01.042664 | orchestrator | Monday 02 June 2025 00:44:57 +0000 (0:00:00.321) 0:03:15.772 ***********
2025-06-02 00:52:01.042670 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.042676 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.042682 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.042688 | orchestrator |
2025-06-02 00:52:01.042695 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-02 00:52:01.042701 | orchestrator | Monday 02 June 2025 00:44:58 +0000 (0:00:01.228) 0:03:17.001 ***********
2025-06-02 00:52:01.042707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:52:01.042713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:52:01.042720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:52:01.042726 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042732 | orchestrator |
2025-06-02 00:52:01.042741 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-02 00:52:01.042747 | orchestrator | Monday 02 June 2025 00:44:59 +0000 (0:00:00.847) 0:03:17.848 ***********
2025-06-02 00:52:01.042754 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.042760 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.042766 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.042772 | orchestrator |
2025-06-02 00:52:01.042779 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-06-02 00:52:01.042785 | orchestrator | Monday 02 June 2025 00:45:00 +0000 (0:00:00.326) 0:03:18.174 ***********
2025-06-02 00:52:01.042795 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.042801 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.042807 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.042813 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042820 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.042826 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.042832 | orchestrator |
2025-06-02 00:52:01.042838 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-02 00:52:01.042844 | orchestrator | Monday 02 June 2025 00:45:00 +0000 (0:00:00.923) 0:03:19.098 ***********
2025-06-02 00:52:01.042867 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.042874 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.042880 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.042887 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:52:01.042893 | orchestrator |
2025-06-02 00:52:01.042899 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-02 00:52:01.042905 | orchestrator | Monday 02 June 2025 00:45:02 +0000 (0:00:01.037) 0:03:20.135 ***********
2025-06-02 00:52:01.042912 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.042918 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.042924 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.042930 | orchestrator |
2025-06-02 00:52:01.042949 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-02 00:52:01.042962 | orchestrator | Monday 02 June 2025 00:45:02 +0000 (0:00:00.338) 0:03:20.474 ***********
2025-06-02 00:52:01.042973 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.042983 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.042990 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.042996 | orchestrator |
2025-06-02 00:52:01.043002 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-02 00:52:01.043009 | orchestrator | Monday 02 June 2025 00:45:03 +0000 (0:00:01.133) 0:03:21.608 ***********
2025-06-02 00:52:01.043015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 00:52:01.043021 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 00:52:01.043027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 00:52:01.043033 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043040 | orchestrator |
2025-06-02 00:52:01.043046 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-02 00:52:01.043052 | orchestrator | Monday 02 June 2025 00:45:04 +0000 (0:00:00.675) 0:03:22.283 ***********
2025-06-02 00:52:01.043058 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043064 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043071 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043077 | orchestrator |
2025-06-02 00:52:01.043083 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-06-02 00:52:01.043089 | orchestrator |
2025-06-02 00:52:01.043095 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 00:52:01.043102 | orchestrator | Monday 02 June 2025 00:45:04 +0000 (0:00:00.663) 0:03:22.946 ***********
2025-06-02 00:52:01.043108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:52:01.043114 | orchestrator |
2025-06-02 00:52:01.043121 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 00:52:01.043127 | orchestrator | Monday 02 June 2025 00:45:05 +0000 (0:00:00.429) 0:03:23.376 ***********
2025-06-02 00:52:01.043133 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:52:01.043139 | orchestrator |
2025-06-02 00:52:01.043145 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 00:52:01.043156 | orchestrator | Monday 02 June 2025 00:45:05 +0000 (0:00:00.546) 0:03:23.922 ***********
2025-06-02 00:52:01.043162 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043168 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043175 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043181 | orchestrator |
2025-06-02 00:52:01.043187 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 00:52:01.043193 | orchestrator | Monday 02 June 2025 00:45:06 +0000 (0:00:00.662) 0:03:24.585 ***********
2025-06-02 00:52:01.043199 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043206 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043212 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043218 | orchestrator |
2025-06-02 00:52:01.043224 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 00:52:01.043230 | orchestrator | Monday 02 June 2025 00:45:06 +0000 (0:00:00.245) 0:03:24.831 ***********
2025-06-02 00:52:01.043237 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043243 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043249 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043255 | orchestrator |
2025-06-02 00:52:01.043261 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 00:52:01.043268 | orchestrator | Monday 02 June 2025 00:45:06 +0000 (0:00:00.244) 0:03:25.075 ***********
2025-06-02 00:52:01.043274 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043280 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043286 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043292 | orchestrator |
2025-06-02 00:52:01.043299 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 00:52:01.043308 | orchestrator | Monday 02 June 2025 00:45:07 +0000 (0:00:00.407) 0:03:25.483 ***********
2025-06-02 00:52:01.043314 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043320 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043326 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043333 | orchestrator |
2025-06-02 00:52:01.043339 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 00:52:01.043345 | orchestrator | Monday 02 June 2025 00:45:08 +0000 (0:00:00.640) 0:03:26.123 ***********
2025-06-02 00:52:01.043351 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043358 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043364 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043370 | orchestrator |
2025-06-02 00:52:01.043376 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 00:52:01.043383 | orchestrator | Monday 02 June 2025 00:45:08 +0000 (0:00:00.301) 0:03:26.424 ***********
2025-06-02 00:52:01.043389 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043395 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043401 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043408 | orchestrator |
2025-06-02 00:52:01.043414 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 00:52:01.043440 | orchestrator | Monday 02 June 2025 00:45:08 +0000 (0:00:00.236) 0:03:26.661 ***********
2025-06-02 00:52:01.043447 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043454 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043460 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043467 | orchestrator |
2025-06-02 00:52:01.043473 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 00:52:01.043479 | orchestrator | Monday 02 June 2025 00:45:09 +0000 (0:00:00.949) 0:03:27.610 ***********
2025-06-02 00:52:01.043485 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043492 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043498 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043504 | orchestrator |
2025-06-02 00:52:01.043511 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 00:52:01.043517 | orchestrator | Monday 02 June 2025 00:45:10 +0000 (0:00:00.756) 0:03:28.367 ***********
2025-06-02 00:52:01.043527 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043534 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043540 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043546 | orchestrator |
2025-06-02 00:52:01.043552 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 00:52:01.043559 | orchestrator | Monday 02 June 2025 00:45:10 +0000 (0:00:00.281) 0:03:28.649 ***********
2025-06-02 00:52:01.043565 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043571 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043577 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043584 | orchestrator |
2025-06-02 00:52:01.043590 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 00:52:01.043596 | orchestrator | Monday 02 June 2025 00:45:10 +0000 (0:00:00.285) 0:03:28.934 ***********
2025-06-02 00:52:01.043602 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043609 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043615 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043621 | orchestrator |
2025-06-02 00:52:01.043627 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 00:52:01.043634 | orchestrator | Monday 02 June 2025 00:45:11 +0000 (0:00:00.505) 0:03:29.440 ***********
2025-06-02 00:52:01.043640 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043646 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043653 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043659 | orchestrator |
2025-06-02 00:52:01.043665 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 00:52:01.043671 | orchestrator | Monday 02 June 2025 00:45:11 +0000 (0:00:00.285) 0:03:29.726 ***********
2025-06-02 00:52:01.043677 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043684 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043690 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043696 | orchestrator |
2025-06-02 00:52:01.043703 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 00:52:01.043709 | orchestrator | Monday 02 June 2025 00:45:11 +0000 (0:00:00.294) 0:03:30.020 ***********
2025-06-02 00:52:01.043715 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043721 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043727 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043734 | orchestrator |
2025-06-02 00:52:01.043740 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 00:52:01.043746 | orchestrator | Monday 02 June 2025 00:45:12 +0000 (0:00:00.282) 0:03:30.303 ***********
2025-06-02 00:52:01.043752 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043759 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.043765 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.043771 | orchestrator |
2025-06-02 00:52:01.043778 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 00:52:01.043784 | orchestrator | Monday 02 June 2025 00:45:12 +0000 (0:00:00.537) 0:03:30.841 ***********
2025-06-02 00:52:01.043790 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043796 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043803 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043809 | orchestrator |
2025-06-02 00:52:01.043815 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 00:52:01.043822 | orchestrator | Monday 02 June 2025 00:45:13 +0000 (0:00:00.367) 0:03:31.208 ***********
2025-06-02 00:52:01.043828 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043834 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043840 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043846 | orchestrator |
2025-06-02 00:52:01.043853 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 00:52:01.043859 | orchestrator | Monday 02 June 2025 00:45:13 +0000 (0:00:00.443) 0:03:31.652 ***********
2025-06-02 00:52:01.043865 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043871 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043883 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043890 | orchestrator |
2025-06-02 00:52:01.043896 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-06-02 00:52:01.043902 | orchestrator | Monday 02 June 2025 00:45:14 +0000 (0:00:00.572) 0:03:32.224 ***********
2025-06-02 00:52:01.043908 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.043917 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.043924 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.043930 | orchestrator |
2025-06-02 00:52:01.043948 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-06-02 00:52:01.043956 | orchestrator | Monday 02 June 2025 00:45:14 +0000 (0:00:00.269) 0:03:32.494 ***********
2025-06-02 00:52:01.043962 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:52:01.043968 | orchestrator |
2025-06-02 00:52:01.043974 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-06-02 00:52:01.043980 | orchestrator | Monday 02 June 2025 00:45:14 +0000 (0:00:00.474) 0:03:32.968 ***********
2025-06-02 00:52:01.043987 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.043993 | orchestrator |
2025-06-02 00:52:01.043999 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-06-02 00:52:01.044005 | orchestrator | Monday 02 June 2025 00:45:14 +0000 (0:00:00.105) 0:03:33.074 ***********
2025-06-02 00:52:01.044012 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-02 00:52:01.044018 | orchestrator |
2025-06-02 00:52:01.044041 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-06-02 00:52:01.044049 | orchestrator | Monday 02 June 2025 00:45:16 +0000 (0:00:01.057) 0:03:34.131 ***********
2025-06-02 00:52:01.044055 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.044061 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.044068 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.044074 | orchestrator |
2025-06-02 00:52:01.044080 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-06-02 00:52:01.044087 | orchestrator | Monday 02 June 2025 00:45:16 +0000 (0:00:00.298) 0:03:34.429 ***********
2025-06-02 00:52:01.044093 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.044099 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.044105 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.044112 | orchestrator |
2025-06-02 00:52:01.044118 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-06-02 00:52:01.044124 | orchestrator | Monday 02 June 2025 00:45:16 +0000 (0:00:00.274) 0:03:34.704 ***********
2025-06-02 00:52:01.044130 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044137 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044143 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044149 | orchestrator |
2025-06-02 00:52:01.044156 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-06-02 00:52:01.044162 | orchestrator | Monday 02 June 2025 00:45:17 +0000 (0:00:01.266) 0:03:35.971 ***********
2025-06-02 00:52:01.044168 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044174 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044181 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044187 | orchestrator |
2025-06-02 00:52:01.044193 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-06-02 00:52:01.044199 | orchestrator | Monday 02 June 2025 00:45:18 +0000 (0:00:00.875) 0:03:36.847 ***********
2025-06-02 00:52:01.044206 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044212 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044218 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044224 | orchestrator |
2025-06-02 00:52:01.044231 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-06-02 00:52:01.044237 | orchestrator | Monday 02 June 2025 00:45:19 +0000 (0:00:00.588) 0:03:37.435 ***********
2025-06-02 00:52:01.044243 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.044254 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.044261 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.044267 | orchestrator |
2025-06-02 00:52:01.044274 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-06-02 00:52:01.044280 | orchestrator | Monday 02 June 2025 00:45:19 +0000 (0:00:00.615) 0:03:38.050 ***********
2025-06-02 00:52:01.044286 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044292 | orchestrator |
2025-06-02 00:52:01.044298 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-06-02 00:52:01.044304 | orchestrator | Monday 02 June 2025 00:45:21 +0000 (0:00:01.117) 0:03:39.168 ***********
2025-06-02 00:52:01.044311 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.044317 | orchestrator |
2025-06-02 00:52:01.044323 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-06-02 00:52:01.044329 | orchestrator | Monday 02 June 2025 00:45:21 +0000 (0:00:00.619) 0:03:39.787 ***********
2025-06-02 00:52:01.044335 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 00:52:01.044342 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:52:01.044348 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:52:01.044354 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 00:52:01.044361 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-06-02 00:52:01.044367 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 00:52:01.044373 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 00:52:01.044379 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-06-02 00:52:01.044386 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-06-02 00:52:01.044392 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-06-02 00:52:01.044398 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 00:52:01.044405 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-06-02 00:52:01.044411 | orchestrator |
2025-06-02 00:52:01.044417 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-06-02 00:52:01.044424 | orchestrator | Monday 02 June 2025 00:45:24 +0000 (0:00:03.081) 0:03:42.869 ***********
2025-06-02 00:52:01.044430 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044436 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044442 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044449 | orchestrator |
2025-06-02 00:52:01.044455 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-06-02 00:52:01.044465 | orchestrator | Monday 02 June 2025 00:45:26 +0000 (0:00:01.326) 0:03:44.195 ***********
2025-06-02 00:52:01.044471 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.044477 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.044484 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.044490 | orchestrator |
2025-06-02 00:52:01.044496 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-06-02 00:52:01.044502 | orchestrator | Monday 02 June 2025 00:45:26 +0000 (0:00:00.293) 0:03:44.488 ***********
2025-06-02 00:52:01.044508 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.044515 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.044521 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.044527 | orchestrator |
2025-06-02 00:52:01.044534 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-06-02 00:52:01.044540 | orchestrator | Monday 02 June 2025 00:45:26 +0000 (0:00:00.258) 0:03:44.747 ***********
2025-06-02 00:52:01.044546 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044552 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044559 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044565 | orchestrator |
2025-06-02 00:52:01.044571 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-06-02 00:52:01.044594 | orchestrator | Monday 02 June 2025 00:45:28 +0000 (0:00:01.567) 0:03:46.314 ***********
2025-06-02 00:52:01.044605 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044611 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044618 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044624 | orchestrator |
2025-06-02 00:52:01.044630 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-06-02 00:52:01.044636 | orchestrator | Monday 02 June 2025 00:45:29 +0000 (0:00:01.377) 0:03:47.692 ***********
2025-06-02 00:52:01.044643 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.044649 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.044655 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.044661 | orchestrator |
2025-06-02 00:52:01.044668 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-06-02 00:52:01.044674 | orchestrator | Monday 02 June 2025 00:45:29 +0000 (0:00:00.437) 0:03:47.958 ***********
2025-06-02 00:52:01.044680 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:52:01.044686 | orchestrator |
2025-06-02 00:52:01.044692 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-06-02 00:52:01.044699 | orchestrator | Monday 02 June 2025 00:45:30 +0000 (0:00:00.437) 0:03:48.396 ***********
2025-06-02 00:52:01.044705 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.044711 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.044717 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.044724 | orchestrator |
2025-06-02 00:52:01.044730 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-06-02 00:52:01.044736 | orchestrator | Monday 02 June 2025 00:45:30 +0000 (0:00:00.392) 0:03:48.788 ***********
2025-06-02 00:52:01.044742 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.044748 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.044755 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.044761 | orchestrator |
2025-06-02 00:52:01.044767 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-06-02 00:52:01.044773 | orchestrator | Monday 02 June 2025 00:45:30 +0000 (0:00:00.249) 0:03:49.037 ***********
2025-06-02 00:52:01.044779 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:52:01.044786 | orchestrator |
2025-06-02 00:52:01.044792 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-06-02 00:52:01.044798 | orchestrator | Monday 02 June 2025 00:45:31 +0000 (0:00:00.469) 0:03:49.507 ***********
2025-06-02 00:52:01.044804 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044810 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044817 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044823 | orchestrator |
2025-06-02 00:52:01.044829 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-06-02 00:52:01.044835 | orchestrator | Monday 02 June 2025 00:45:33 +0000 (0:00:01.818) 0:03:51.326 ***********
2025-06-02 00:52:01.044841 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044848 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044854 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044860 | orchestrator |
2025-06-02 00:52:01.044866 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-06-02 00:52:01.044872 | orchestrator | Monday 02 June 2025 00:45:34 +0000 (0:00:01.123) 0:03:52.449 ***********
2025-06-02 00:52:01.044878 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044885 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044891 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044897 | orchestrator |
2025-06-02 00:52:01.044903 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-06-02 00:52:01.044909 | orchestrator | Monday 02 June 2025 00:45:36 +0000 (0:00:01.822) 0:03:54.272 ***********
2025-06-02 00:52:01.044915 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.044922 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.044931 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.044968 | orchestrator |
2025-06-02 00:52:01.044976 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-06-02 00:52:01.044983 | orchestrator | Monday 02 June 2025 00:45:38 +0000 (0:00:02.049) 0:03:56.322 ***********
2025-06-02 00:52:01.044989 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:52:01.044995 | orchestrator |
2025-06-02 00:52:01.045002 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-06-02 00:52:01.045008 | orchestrator | Monday 02 June 2025 00:45:38 +0000 (0:00:00.742) 0:03:57.064 ***********
2025-06-02 00:52:01.045014 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-06-02 00:52:01.045020 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.045027 | orchestrator |
2025-06-02 00:52:01.045033 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-06-02 00:52:01.045042 | orchestrator | Monday 02 June 2025 00:46:00 +0000 (0:00:21.794) 0:04:18.859 ***********
2025-06-02 00:52:01.045049 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.045055 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.045061 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.045068 | orchestrator |
2025-06-02 00:52:01.045074 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-06-02 00:52:01.045080 | orchestrator | Monday 02 June 2025 00:46:10 +0000 (0:00:09.970) 0:04:28.829 ***********
2025-06-02 00:52:01.045086 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.045092 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.045099 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.045105 | orchestrator |
2025-06-02 00:52:01.045111 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-06-02 00:52:01.045117 | orchestrator | Monday 02 June 2025 00:46:11 +0000 (0:00:00.524) 0:04:29.354 ***********
2025-06-02 00:52:01.045144 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d042b0737be5b0ebc37365e09e689a1653b1c83a'}}, {'key': 'public_network',
'value': '192.168.16.0/20'}]) 2025-06-02 00:52:01.045152 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d042b0737be5b0ebc37365e09e689a1653b1c83a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-02 00:52:01.045160 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d042b0737be5b0ebc37365e09e689a1653b1c83a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-02 00:52:01.045167 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d042b0737be5b0ebc37365e09e689a1653b1c83a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-02 00:52:01.045174 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d042b0737be5b0ebc37365e09e689a1653b1c83a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-02 00:52:01.045185 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__d042b0737be5b0ebc37365e09e689a1653b1c83a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d042b0737be5b0ebc37365e09e689a1653b1c83a'}])  2025-06-02 00:52:01.045192 | orchestrator | 2025-06-02 00:52:01.045198 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 00:52:01.045205 | orchestrator | Monday 02 June 2025 00:46:25 +0000 (0:00:14.454) 0:04:43.808 *********** 2025-06-02 00:52:01.045211 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045217 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045224 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045230 | orchestrator | 2025-06-02 00:52:01.045236 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-02 00:52:01.045242 | orchestrator | Monday 02 June 2025 00:46:26 +0000 (0:00:00.389) 0:04:44.197 *********** 2025-06-02 00:52:01.045249 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:52:01.045255 | orchestrator | 2025-06-02 00:52:01.045261 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-02 00:52:01.045267 | orchestrator | Monday 02 June 2025 00:46:26 +0000 (0:00:00.662) 0:04:44.860 *********** 2025-06-02 00:52:01.045273 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.045280 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.045286 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.045292 | orchestrator | 2025-06-02 00:52:01.045298 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-02 00:52:01.045304 | orchestrator | Monday 02 June 2025 00:46:27 +0000 (0:00:00.313) 0:04:45.173 *********** 2025-06-02 00:52:01.045310 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045317 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045323 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045329 | orchestrator | 2025-06-02 00:52:01.045338 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-02 00:52:01.045343 | orchestrator | Monday 02 June 2025 00:46:27 +0000 (0:00:00.323) 0:04:45.497 *********** 2025-06-02 00:52:01.045349 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 00:52:01.045354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 00:52:01.045360 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 00:52:01.045366 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045371 | orchestrator | 2025-06-02 00:52:01.045376 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-02 00:52:01.045382 | orchestrator | Monday 02 June 2025 00:46:28 +0000 (0:00:00.719) 0:04:46.216 *********** 2025-06-02 00:52:01.045387 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.045393 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.045398 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.045404 | orchestrator | 2025-06-02 00:52:01.045409 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-02 00:52:01.045415 | orchestrator | 2025-06-02 00:52:01.045420 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 00:52:01.045441 | orchestrator | Monday 02 June 2025 00:46:28 +0000 (0:00:00.626) 0:04:46.843 *********** 2025-06-02 00:52:01.045447 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:52:01.045453 | orchestrator | 2025-06-02 00:52:01.045458 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-02 00:52:01.045464 | orchestrator | Monday 02 June 2025 00:46:29 +0000 (0:00:00.373) 0:04:47.217 *********** 2025-06-02 00:52:01.045469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:52:01.045478 | orchestrator | 2025-06-02 00:52:01.045484 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 00:52:01.045489 | orchestrator | Monday 02 June 2025 00:46:29 +0000 (0:00:00.534) 0:04:47.751 *********** 2025-06-02 00:52:01.045494 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.045500 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.045505 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.045511 | orchestrator | 2025-06-02 00:52:01.045516 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 00:52:01.045522 | orchestrator | Monday 02 June 2025 00:46:30 +0000 (0:00:00.715) 0:04:48.467 *********** 2025-06-02 00:52:01.045527 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045533 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045538 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045544 | orchestrator | 2025-06-02 00:52:01.045549 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 00:52:01.045555 | orchestrator | Monday 02 June 2025 00:46:30 +0000 (0:00:00.218) 0:04:48.685 *********** 2025-06-02 00:52:01.045560 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045566 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045571 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045577 | orchestrator | 2025-06-02 00:52:01.045582 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 
00:52:01.045588 | orchestrator | Monday 02 June 2025 00:46:30 +0000 (0:00:00.364) 0:04:49.049 *********** 2025-06-02 00:52:01.045593 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045598 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045604 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045609 | orchestrator | 2025-06-02 00:52:01.045615 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 00:52:01.045620 | orchestrator | Monday 02 June 2025 00:46:31 +0000 (0:00:00.222) 0:04:49.271 *********** 2025-06-02 00:52:01.045626 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.045631 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.045637 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.045642 | orchestrator | 2025-06-02 00:52:01.045648 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 00:52:01.045653 | orchestrator | Monday 02 June 2025 00:46:31 +0000 (0:00:00.600) 0:04:49.872 *********** 2025-06-02 00:52:01.045658 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045664 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045669 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045675 | orchestrator | 2025-06-02 00:52:01.045680 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 00:52:01.045686 | orchestrator | Monday 02 June 2025 00:46:31 +0000 (0:00:00.237) 0:04:50.110 *********** 2025-06-02 00:52:01.045691 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045697 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045702 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045708 | orchestrator | 2025-06-02 00:52:01.045713 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 00:52:01.045719 | 
orchestrator | Monday 02 June 2025 00:46:32 +0000 (0:00:00.390) 0:04:50.500 *********** 2025-06-02 00:52:01.045724 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.045730 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.045735 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.045741 | orchestrator | 2025-06-02 00:52:01.045746 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 00:52:01.045752 | orchestrator | Monday 02 June 2025 00:46:33 +0000 (0:00:00.624) 0:04:51.124 *********** 2025-06-02 00:52:01.045757 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.045763 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.045768 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.045777 | orchestrator | 2025-06-02 00:52:01.045782 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 00:52:01.045788 | orchestrator | Monday 02 June 2025 00:46:33 +0000 (0:00:00.658) 0:04:51.783 *********** 2025-06-02 00:52:01.045793 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045799 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045804 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045810 | orchestrator | 2025-06-02 00:52:01.045815 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 00:52:01.045823 | orchestrator | Monday 02 June 2025 00:46:33 +0000 (0:00:00.263) 0:04:52.046 *********** 2025-06-02 00:52:01.045828 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.045834 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.045840 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.045845 | orchestrator | 2025-06-02 00:52:01.045851 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 00:52:01.045856 | orchestrator | Monday 02 June 2025 00:46:34 +0000 
(0:00:00.448) 0:04:52.495 *********** 2025-06-02 00:52:01.045862 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045867 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045873 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045878 | orchestrator | 2025-06-02 00:52:01.045883 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 00:52:01.045889 | orchestrator | Monday 02 June 2025 00:46:34 +0000 (0:00:00.283) 0:04:52.778 *********** 2025-06-02 00:52:01.045894 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045900 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045905 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045911 | orchestrator | 2025-06-02 00:52:01.045916 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 00:52:01.045946 | orchestrator | Monday 02 June 2025 00:46:34 +0000 (0:00:00.269) 0:04:53.048 *********** 2025-06-02 00:52:01.045954 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045959 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045965 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.045970 | orchestrator | 2025-06-02 00:52:01.045976 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 00:52:01.045981 | orchestrator | Monday 02 June 2025 00:46:35 +0000 (0:00:00.357) 0:04:53.406 *********** 2025-06-02 00:52:01.045987 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.045992 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.045998 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.046003 | orchestrator | 2025-06-02 00:52:01.046009 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 00:52:01.046027 | orchestrator | Monday 02 June 2025 00:46:35 +0000 
(0:00:00.415) 0:04:53.821 *********** 2025-06-02 00:52:01.046033 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.046040 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.046045 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.046051 | orchestrator | 2025-06-02 00:52:01.046056 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 00:52:01.046062 | orchestrator | Monday 02 June 2025 00:46:35 +0000 (0:00:00.260) 0:04:54.082 *********** 2025-06-02 00:52:01.046067 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.046073 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.046078 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.046084 | orchestrator | 2025-06-02 00:52:01.046089 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 00:52:01.046094 | orchestrator | Monday 02 June 2025 00:46:36 +0000 (0:00:00.372) 0:04:54.455 *********** 2025-06-02 00:52:01.046100 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.046105 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.046111 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.046116 | orchestrator | 2025-06-02 00:52:01.046122 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 00:52:01.046131 | orchestrator | Monday 02 June 2025 00:46:36 +0000 (0:00:00.426) 0:04:54.881 *********** 2025-06-02 00:52:01.046136 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.046142 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.046147 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.046152 | orchestrator | 2025-06-02 00:52:01.046158 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-02 00:52:01.046163 | orchestrator | Monday 02 June 2025 00:46:37 +0000 (0:00:00.746) 0:04:55.628 *********** 2025-06-02 
00:52:01.046168 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 00:52:01.046174 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 00:52:01.046180 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 00:52:01.046185 | orchestrator | 2025-06-02 00:52:01.046190 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-02 00:52:01.046196 | orchestrator | Monday 02 June 2025 00:46:37 +0000 (0:00:00.489) 0:04:56.118 *********** 2025-06-02 00:52:01.046201 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:52:01.046207 | orchestrator | 2025-06-02 00:52:01.046212 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-02 00:52:01.046218 | orchestrator | Monday 02 June 2025 00:46:38 +0000 (0:00:00.431) 0:04:56.549 *********** 2025-06-02 00:52:01.046223 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:52:01.046229 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:52:01.046234 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:52:01.046240 | orchestrator | 2025-06-02 00:52:01.046245 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-02 00:52:01.046250 | orchestrator | Monday 02 June 2025 00:46:39 +0000 (0:00:00.886) 0:04:57.436 *********** 2025-06-02 00:52:01.046256 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.046261 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.046267 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.046272 | orchestrator | 2025-06-02 00:52:01.046278 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-02 00:52:01.046283 | orchestrator | Monday 02 June 2025 00:46:39 +0000 
(0:00:00.321) 0:04:57.757 *********** 2025-06-02 00:52:01.046289 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 00:52:01.046294 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 00:52:01.046300 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 00:52:01.046305 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-02 00:52:01.046311 | orchestrator | 2025-06-02 00:52:01.046316 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-02 00:52:01.046322 | orchestrator | Monday 02 June 2025 00:46:49 +0000 (0:00:10.202) 0:05:07.959 *********** 2025-06-02 00:52:01.046330 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.046336 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.046341 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.046347 | orchestrator | 2025-06-02 00:52:01.046352 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-02 00:52:01.046358 | orchestrator | Monday 02 June 2025 00:46:50 +0000 (0:00:00.445) 0:05:08.405 *********** 2025-06-02 00:52:01.046363 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 00:52:01.046369 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 00:52:01.046374 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 00:52:01.046380 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-02 00:52:01.046385 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:52:01.046391 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:52:01.046396 | orchestrator | 2025-06-02 00:52:01.046402 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-02 00:52:01.046410 | orchestrator | Monday 02 June 2025 00:46:53 +0000 (0:00:02.869) 
0:05:11.274 *********** 2025-06-02 00:52:01.046432 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 00:52:01.046438 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 00:52:01.046444 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 00:52:01.046450 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 00:52:01.046455 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-02 00:52:01.046461 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-02 00:52:01.046466 | orchestrator | 2025-06-02 00:52:01.046472 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-02 00:52:01.046478 | orchestrator | Monday 02 June 2025 00:46:54 +0000 (0:00:01.171) 0:05:12.446 *********** 2025-06-02 00:52:01.046483 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:52:01.046489 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:52:01.046494 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:52:01.046500 | orchestrator | 2025-06-02 00:52:01.046506 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-02 00:52:01.046511 | orchestrator | Monday 02 June 2025 00:46:54 +0000 (0:00:00.565) 0:05:13.012 *********** 2025-06-02 00:52:01.046517 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.046522 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.046528 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.046533 | orchestrator | 2025-06-02 00:52:01.046539 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-02 00:52:01.046545 | orchestrator | Monday 02 June 2025 00:46:55 +0000 (0:00:00.285) 0:05:13.297 *********** 2025-06-02 00:52:01.046550 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.046556 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.046561 | orchestrator | 
skipping: [testbed-node-2] 2025-06-02 00:52:01.046567 | orchestrator | 2025-06-02 00:52:01.046572 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-06-02 00:52:01.046578 | orchestrator | Monday 02 June 2025 00:46:55 +0000 (0:00:00.459) 0:05:13.756 *********** 2025-06-02 00:52:01.046584 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:52:01.046589 | orchestrator | 2025-06-02 00:52:01.046595 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-02 00:52:01.046600 | orchestrator | Monday 02 June 2025 00:46:56 +0000 (0:00:00.410) 0:05:14.167 *********** 2025-06-02 00:52:01.046606 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.046611 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.046617 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.046622 | orchestrator | 2025-06-02 00:52:01.046628 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-02 00:52:01.046633 | orchestrator | Monday 02 June 2025 00:46:56 +0000 (0:00:00.300) 0:05:14.467 *********** 2025-06-02 00:52:01.046639 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.046645 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.046650 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:52:01.046656 | orchestrator | 2025-06-02 00:52:01.046661 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-02 00:52:01.046667 | orchestrator | Monday 02 June 2025 00:46:56 +0000 (0:00:00.274) 0:05:14.742 *********** 2025-06-02 00:52:01.046672 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:52:01.046678 | orchestrator | 2025-06-02 00:52:01.046683 | orchestrator | TASK [ceph-mgr : 
Generate systemd unit file] *********************************** 2025-06-02 00:52:01.046689 | orchestrator | Monday 02 June 2025 00:46:57 +0000 (0:00:00.553) 0:05:15.296 *********** 2025-06-02 00:52:01.046694 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:52:01.046700 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:52:01.046711 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:52:01.046717 | orchestrator | 2025-06-02 00:52:01.046722 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-02 00:52:01.046728 | orchestrator | Monday 02 June 2025 00:46:58 +0000 (0:00:01.075) 0:05:16.371 *********** 2025-06-02 00:52:01.046733 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:52:01.046739 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:52:01.046744 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:52:01.046750 | orchestrator | 2025-06-02 00:52:01.046755 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-02 00:52:01.046761 | orchestrator | Monday 02 June 2025 00:46:59 +0000 (0:00:01.053) 0:05:17.425 *********** 2025-06-02 00:52:01.046766 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:52:01.046772 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:52:01.046777 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:52:01.046783 | orchestrator | 2025-06-02 00:52:01.046788 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-02 00:52:01.046794 | orchestrator | Monday 02 June 2025 00:47:01 +0000 (0:00:01.877) 0:05:19.303 *********** 2025-06-02 00:52:01.046800 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:52:01.046805 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:52:01.046814 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:52:01.046819 | orchestrator | 2025-06-02 00:52:01.046825 | orchestrator | TASK [ceph-mgr : Include 
mgr_modules.yml] ************************************** 2025-06-02 00:52:01.046830 | orchestrator | Monday 02 June 2025 00:47:03 +0000 (0:00:01.968) 0:05:21.271 *********** 2025-06-02 00:52:01.046836 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:52:01.046842 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:52:01.046847 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-02 00:52:01.046853 | orchestrator | 2025-06-02 00:52:01.046858 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-02 00:52:01.046864 | orchestrator | Monday 02 June 2025 00:47:03 +0000 (0:00:00.427) 0:05:21.699 *********** 2025-06-02 00:52:01.046869 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-02 00:52:01.046875 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-02 00:52:01.046894 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-02 00:52:01.046901 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-02 00:52:01.046906 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-06-02 00:52:01.046912 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-02 00:52:01.046917 | orchestrator |
2025-06-02 00:52:01.046923 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-06-02 00:52:01.046928 | orchestrator | Monday 02 June 2025 00:47:33 +0000 (0:00:30.034) 0:05:51.733 ***********
2025-06-02 00:52:01.046933 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-02 00:52:01.046953 | orchestrator |
2025-06-02 00:52:01.046963 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-06-02 00:52:01.046971 | orchestrator | Monday 02 June 2025 00:47:35 +0000 (0:00:01.587) 0:05:53.320 ***********
2025-06-02 00:52:01.046977 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.046982 | orchestrator |
2025-06-02 00:52:01.046988 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-06-02 00:52:01.046993 | orchestrator | Monday 02 June 2025 00:47:36 +0000 (0:00:00.877) 0:05:54.198 ***********
2025-06-02 00:52:01.046998 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.047004 | orchestrator |
2025-06-02 00:52:01.047009 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-06-02 00:52:01.047015 | orchestrator | Monday 02 June 2025 00:47:36 +0000 (0:00:00.150) 0:05:54.348 ***********
2025-06-02 00:52:01.047024 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-06-02 00:52:01.047030 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-06-02 00:52:01.047035 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-06-02 00:52:01.047040 | orchestrator |
2025-06-02 00:52:01.047046 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-06-02 00:52:01.047051 | orchestrator | Monday 02 June 2025 00:47:42 +0000 (0:00:06.407) 0:06:00.755 ***********
2025-06-02 00:52:01.047057 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-06-02 00:52:01.047062 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-06-02 00:52:01.047068 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-06-02 00:52:01.047073 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-06-02 00:52:01.047078 | orchestrator |
2025-06-02 00:52:01.047084 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 00:52:01.047089 | orchestrator | Monday 02 June 2025 00:47:47 +0000 (0:00:04.627) 0:06:05.383 ***********
2025-06-02 00:52:01.047095 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.047100 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.047106 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.047111 | orchestrator |
2025-06-02 00:52:01.047117 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-02 00:52:01.047122 | orchestrator | Monday 02 June 2025 00:47:48 +0000 (0:00:00.874) 0:06:06.257 ***********
2025-06-02 00:52:01.047127 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:52:01.047133 | orchestrator |
2025-06-02 00:52:01.047138 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-02 00:52:01.047144 | orchestrator | Monday 02 June 2025 00:47:48 +0000 (0:00:00.476) 0:06:06.734 ***********
2025-06-02 00:52:01.047149 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.047155 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.047160 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.047166 | orchestrator |
2025-06-02 00:52:01.047171 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-02 00:52:01.047176 | orchestrator | Monday 02 June 2025 00:47:48 +0000 (0:00:00.289) 0:06:07.024 ***********
2025-06-02 00:52:01.047182 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.047187 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.047193 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.047198 | orchestrator |
2025-06-02 00:52:01.047204 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-02 00:52:01.047209 | orchestrator | Monday 02 June 2025 00:47:50 +0000 (0:00:01.386) 0:06:08.410 ***********
2025-06-02 00:52:01.047215 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 00:52:01.047220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 00:52:01.047226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 00:52:01.047234 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.047239 | orchestrator |
2025-06-02 00:52:01.047245 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-02 00:52:01.047251 | orchestrator | Monday 02 June 2025 00:47:50 +0000 (0:00:00.565) 0:06:08.976 ***********
2025-06-02 00:52:01.047256 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.047262 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.047267 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.047272 | orchestrator |
2025-06-02 00:52:01.047278 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-06-02 00:52:01.047283 | orchestrator |
2025-06-02 00:52:01.047289 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 00:52:01.047297 | orchestrator | Monday 02 June 2025 00:47:51 +0000 (0:00:00.539) 0:06:09.516 ***********
2025-06-02 00:52:01.047303 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.047308 | orchestrator |
2025-06-02 00:52:01.047313 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 00:52:01.047337 | orchestrator | Monday 02 June 2025 00:47:52 +0000 (0:00:00.676) 0:06:10.192 ***********
2025-06-02 00:52:01.047343 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.047349 | orchestrator |
2025-06-02 00:52:01.047354 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 00:52:01.047360 | orchestrator | Monday 02 June 2025 00:47:52 +0000 (0:00:00.497) 0:06:10.689 ***********
2025-06-02 00:52:01.047365 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.047371 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.047376 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.047381 | orchestrator |
2025-06-02 00:52:01.047387 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 00:52:01.047392 | orchestrator | Monday 02 June 2025 00:47:52 +0000 (0:00:00.327) 0:06:11.017 ***********
2025-06-02 00:52:01.047398 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.047403 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.047409 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.047414 | orchestrator |
2025-06-02 00:52:01.047419 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 00:52:01.047425 | orchestrator | Monday 02 June 2025 00:47:53 +0000 (0:00:00.977) 0:06:11.994 ***********
2025-06-02 00:52:01.047430 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.047436 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.047441 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.047447 | orchestrator | 2025-06-02 00:52:01.047452 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 00:52:01.047457 | orchestrator | Monday 02 June 2025 00:47:54 +0000 (0:00:00.654) 0:06:12.649 *********** 2025-06-02 00:52:01.047463 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.047468 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.047474 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.047479 | orchestrator | 2025-06-02 00:52:01.047485 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 00:52:01.047490 | orchestrator | Monday 02 June 2025 00:47:55 +0000 (0:00:00.653) 0:06:13.303 *********** 2025-06-02 00:52:01.047496 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.047501 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.047507 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.047512 | orchestrator | 2025-06-02 00:52:01.047518 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 00:52:01.047523 | orchestrator | Monday 02 June 2025 00:47:55 +0000 (0:00:00.280) 0:06:13.584 *********** 2025-06-02 00:52:01.047529 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.047534 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.047539 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.047545 | orchestrator | 2025-06-02 00:52:01.047550 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 00:52:01.047556 | orchestrator | Monday 02 June 2025 00:47:56 +0000 (0:00:00.572) 0:06:14.156 *********** 2025-06-02 00:52:01.047561 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.047566 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.047572 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.047577 | orchestrator | 2025-06-02 00:52:01.047583 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 00:52:01.047588 | orchestrator | Monday 02 June 2025 00:47:56 +0000 (0:00:00.294) 0:06:14.451 *********** 2025-06-02 00:52:01.047593 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.047602 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.047608 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.047613 | orchestrator | 2025-06-02 00:52:01.047619 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 00:52:01.047624 | orchestrator | Monday 02 June 2025 00:47:56 +0000 (0:00:00.645) 0:06:15.096 *********** 2025-06-02 00:52:01.047630 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.047635 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.047641 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.047646 | orchestrator | 2025-06-02 00:52:01.047651 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 00:52:01.047657 | orchestrator | Monday 02 June 2025 00:47:57 +0000 (0:00:00.631) 0:06:15.727 *********** 2025-06-02 00:52:01.047662 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.047668 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.047673 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.047679 | orchestrator | 2025-06-02 00:52:01.047685 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 00:52:01.047690 | orchestrator | Monday 02 June 2025 00:47:58 +0000 (0:00:00.493) 0:06:16.221 *********** 2025-06-02 00:52:01.047696 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 00:52:01.047701 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.047707 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.047712 | orchestrator | 2025-06-02 00:52:01.047718 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 00:52:01.047723 | orchestrator | Monday 02 June 2025 00:47:58 +0000 (0:00:00.292) 0:06:16.513 *********** 2025-06-02 00:52:01.047731 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.047737 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.047742 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.047747 | orchestrator | 2025-06-02 00:52:01.047753 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 00:52:01.047758 | orchestrator | Monday 02 June 2025 00:47:58 +0000 (0:00:00.295) 0:06:16.808 *********** 2025-06-02 00:52:01.047764 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.047769 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.047775 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.047781 | orchestrator | 2025-06-02 00:52:01.047786 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 00:52:01.047791 | orchestrator | Monday 02 June 2025 00:47:58 +0000 (0:00:00.305) 0:06:17.114 *********** 2025-06-02 00:52:01.047797 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.047802 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.047808 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.047813 | orchestrator | 2025-06-02 00:52:01.047819 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 00:52:01.047824 | orchestrator | Monday 02 June 2025 00:47:59 +0000 (0:00:00.542) 0:06:17.657 *********** 2025-06-02 00:52:01.047832 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.047838 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.047843 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.047849 | orchestrator | 2025-06-02 00:52:01.047854 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 00:52:01.047860 | orchestrator | Monday 02 June 2025 00:47:59 +0000 (0:00:00.292) 0:06:17.949 *********** 2025-06-02 00:52:01.047865 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.047871 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.047876 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.047882 | orchestrator | 2025-06-02 00:52:01.047887 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 00:52:01.047893 | orchestrator | Monday 02 June 2025 00:48:00 +0000 (0:00:00.281) 0:06:18.231 *********** 2025-06-02 00:52:01.047898 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.047904 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.047909 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.047918 | orchestrator | 2025-06-02 00:52:01.047923 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 00:52:01.047929 | orchestrator | Monday 02 June 2025 00:48:00 +0000 (0:00:00.279) 0:06:18.510 *********** 2025-06-02 00:52:01.047934 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.047953 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.047959 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.047964 | orchestrator | 2025-06-02 00:52:01.047970 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 00:52:01.047975 | orchestrator | Monday 02 June 2025 00:48:00 +0000 (0:00:00.546) 0:06:19.056 *********** 2025-06-02 00:52:01.047981 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.047986 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 00:52:01.047992 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.047997 | orchestrator | 2025-06-02 00:52:01.048003 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-02 00:52:01.048008 | orchestrator | Monday 02 June 2025 00:48:01 +0000 (0:00:00.505) 0:06:19.562 *********** 2025-06-02 00:52:01.048014 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.048019 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.048024 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.048030 | orchestrator | 2025-06-02 00:52:01.048035 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-02 00:52:01.048041 | orchestrator | Monday 02 June 2025 00:48:01 +0000 (0:00:00.286) 0:06:19.848 *********** 2025-06-02 00:52:01.048046 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 00:52:01.048051 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 00:52:01.048057 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 00:52:01.048062 | orchestrator | 2025-06-02 00:52:01.048068 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-02 00:52:01.048073 | orchestrator | Monday 02 June 2025 00:48:02 +0000 (0:00:00.814) 0:06:20.662 *********** 2025-06-02 00:52:01.048078 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.048084 | orchestrator | 2025-06-02 00:52:01.048089 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-02 00:52:01.048095 | orchestrator | Monday 02 June 2025 00:48:03 +0000 (0:00:00.734) 0:06:21.397 *********** 2025-06-02 00:52:01.048100 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 00:52:01.048106 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.048111 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.048117 | orchestrator | 2025-06-02 00:52:01.048122 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-02 00:52:01.048127 | orchestrator | Monday 02 June 2025 00:48:03 +0000 (0:00:00.276) 0:06:21.674 *********** 2025-06-02 00:52:01.048133 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.048139 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.048144 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.048150 | orchestrator | 2025-06-02 00:52:01.048155 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-02 00:52:01.048161 | orchestrator | Monday 02 June 2025 00:48:03 +0000 (0:00:00.294) 0:06:21.968 *********** 2025-06-02 00:52:01.048166 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.048172 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.048177 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.048182 | orchestrator | 2025-06-02 00:52:01.048188 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-02 00:52:01.048193 | orchestrator | Monday 02 June 2025 00:48:04 +0000 (0:00:00.937) 0:06:22.906 *********** 2025-06-02 00:52:01.048199 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.048204 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.048210 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.048219 | orchestrator | 2025-06-02 00:52:01.048224 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-02 00:52:01.048232 | orchestrator | Monday 02 June 2025 00:48:05 +0000 (0:00:00.329) 0:06:23.235 *********** 2025-06-02 00:52:01.048238 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 00:52:01.048243 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 00:52:01.048249 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 00:52:01.048255 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 00:52:01.048260 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 00:52:01.048265 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 00:52:01.048271 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 00:52:01.048280 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 00:52:01.048285 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 00:52:01.048291 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 00:52:01.048296 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 00:52:01.048302 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 00:52:01.048307 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 00:52:01.048312 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 00:52:01.048318 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 00:52:01.048323 | orchestrator | 2025-06-02 00:52:01.048329 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-06-02 00:52:01.048334 | orchestrator | Monday 02 June 2025 00:48:08 +0000 (0:00:03.070) 0:06:26.306 *********** 2025-06-02 00:52:01.048340 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.048345 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.048351 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.048356 | orchestrator | 2025-06-02 00:52:01.048361 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-02 00:52:01.048367 | orchestrator | Monday 02 June 2025 00:48:08 +0000 (0:00:00.297) 0:06:26.603 *********** 2025-06-02 00:52:01.048372 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.048378 | orchestrator | 2025-06-02 00:52:01.048383 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-02 00:52:01.048389 | orchestrator | Monday 02 June 2025 00:48:09 +0000 (0:00:00.717) 0:06:27.321 *********** 2025-06-02 00:52:01.048394 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 00:52:01.048399 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 00:52:01.048405 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 00:52:01.048410 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-02 00:52:01.048416 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-02 00:52:01.048421 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-02 00:52:01.048427 | orchestrator | 2025-06-02 00:52:01.048432 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-02 00:52:01.048438 | orchestrator | Monday 02 June 2025 00:48:10 +0000 (0:00:00.965) 0:06:28.287 *********** 2025-06-02 00:52:01.048443 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:52:01.048452 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 00:52:01.048457 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 00:52:01.048463 | orchestrator | 2025-06-02 00:52:01.048468 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-02 00:52:01.048473 | orchestrator | Monday 02 June 2025 00:48:12 +0000 (0:00:02.206) 0:06:30.493 *********** 2025-06-02 00:52:01.048479 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 00:52:01.048484 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 00:52:01.048490 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.048495 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 00:52:01.048501 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 00:52:01.048506 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.048512 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 00:52:01.048517 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 00:52:01.048523 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.048528 | orchestrator | 2025-06-02 00:52:01.048533 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-02 00:52:01.048539 | orchestrator | Monday 02 June 2025 00:48:13 +0000 (0:00:01.454) 0:06:31.948 *********** 2025-06-02 00:52:01.048544 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 00:52:01.048550 | orchestrator | 2025-06-02 00:52:01.048555 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-02 00:52:01.048560 | orchestrator | Monday 02 June 2025 00:48:15 +0000 (0:00:02.068) 0:06:34.017 *********** 2025-06-02 00:52:01.048566 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.048572 | orchestrator | 2025-06-02 00:52:01.048579 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-02 00:52:01.048585 | orchestrator | Monday 02 June 2025 00:48:16 +0000 (0:00:00.594) 0:06:34.611 *********** 2025-06-02 00:52:01.048590 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-89fe9f69-ec16-58f3-8212-bc080cf4c28c', 'data_vg': 'ceph-89fe9f69-ec16-58f3-8212-bc080cf4c28c'}) 2025-06-02 00:52:01.048596 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3a2aacf8-31c8-546a-a559-f7f9618b27d4', 'data_vg': 'ceph-3a2aacf8-31c8-546a-a559-f7f9618b27d4'}) 2025-06-02 00:52:01.048602 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644', 'data_vg': 'ceph-93d4fc0b-cb5c-5d00-94e8-8a1d2b9f8644'}) 2025-06-02 00:52:01.048610 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3a308c11-b64c-503e-b49b-4b3a12050ecf', 'data_vg': 'ceph-3a308c11-b64c-503e-b49b-4b3a12050ecf'}) 2025-06-02 00:52:01.048616 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-17a6e190-aa70-5b53-9f6a-9d016360bd22', 'data_vg': 'ceph-17a6e190-aa70-5b53-9f6a-9d016360bd22'}) 2025-06-02 00:52:01.048621 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1905453d-e612-5c47-8424-6bc4888ba216', 'data_vg': 'ceph-1905453d-e612-5c47-8424-6bc4888ba216'}) 2025-06-02 00:52:01.048627 | orchestrator | 2025-06-02 00:52:01.048632 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-02 00:52:01.048638 | orchestrator | Monday 02 June 2025 00:48:54 +0000 (0:00:38.347) 0:07:12.959 *********** 2025-06-02 00:52:01.048643 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.048649 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
00:52:01.048654 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.048660 | orchestrator | 2025-06-02 00:52:01.048665 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-02 00:52:01.048671 | orchestrator | Monday 02 June 2025 00:48:55 +0000 (0:00:00.704) 0:07:13.664 *********** 2025-06-02 00:52:01.048676 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.048684 | orchestrator | 2025-06-02 00:52:01.048690 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-02 00:52:01.048695 | orchestrator | Monday 02 June 2025 00:48:56 +0000 (0:00:00.567) 0:07:14.231 *********** 2025-06-02 00:52:01.048701 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.048706 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.048712 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.048717 | orchestrator | 2025-06-02 00:52:01.048723 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-02 00:52:01.048728 | orchestrator | Monday 02 June 2025 00:48:56 +0000 (0:00:00.638) 0:07:14.869 *********** 2025-06-02 00:52:01.048734 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.048740 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.048745 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.048751 | orchestrator | 2025-06-02 00:52:01.048756 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-02 00:52:01.048761 | orchestrator | Monday 02 June 2025 00:48:59 +0000 (0:00:02.781) 0:07:17.651 *********** 2025-06-02 00:52:01.048767 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.048772 | orchestrator | 2025-06-02 00:52:01.048778 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-06-02 00:52:01.048783 | orchestrator | Monday 02 June 2025 00:49:00 +0000 (0:00:00.491) 0:07:18.142 *********** 2025-06-02 00:52:01.048788 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.048794 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.048800 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.048805 | orchestrator | 2025-06-02 00:52:01.048811 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-02 00:52:01.048816 | orchestrator | Monday 02 June 2025 00:49:01 +0000 (0:00:01.210) 0:07:19.353 *********** 2025-06-02 00:52:01.048821 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.048827 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.048833 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.048838 | orchestrator | 2025-06-02 00:52:01.048843 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-02 00:52:01.048849 | orchestrator | Monday 02 June 2025 00:49:02 +0000 (0:00:01.350) 0:07:20.703 *********** 2025-06-02 00:52:01.048854 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.048860 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.048865 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.048871 | orchestrator | 2025-06-02 00:52:01.048876 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-02 00:52:01.048882 | orchestrator | Monday 02 June 2025 00:49:04 +0000 (0:00:01.660) 0:07:22.364 *********** 2025-06-02 00:52:01.048887 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.048892 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.048898 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.048903 | orchestrator | 2025-06-02 00:52:01.048909 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-06-02 00:52:01.048914 | orchestrator | Monday 02 June 2025 00:49:04 +0000 (0:00:00.320) 0:07:22.684 *********** 2025-06-02 00:52:01.048920 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.048925 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.048931 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.048936 | orchestrator | 2025-06-02 00:52:01.048970 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-02 00:52:01.048975 | orchestrator | Monday 02 June 2025 00:49:04 +0000 (0:00:00.290) 0:07:22.975 *********** 2025-06-02 00:52:01.048981 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-06-02 00:52:01.048986 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-06-02 00:52:01.048995 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-06-02 00:52:01.049000 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-02 00:52:01.049009 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 00:52:01.049014 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-02 00:52:01.049020 | orchestrator | 2025-06-02 00:52:01.049025 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-02 00:52:01.049031 | orchestrator | Monday 02 June 2025 00:49:06 +0000 (0:00:01.190) 0:07:24.166 *********** 2025-06-02 00:52:01.049036 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-02 00:52:01.049042 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-02 00:52:01.049047 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-02 00:52:01.049052 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 00:52:01.049058 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-02 00:52:01.049063 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-02 00:52:01.049069 | orchestrator | 2025-06-02 00:52:01.049074 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-06-02 00:52:01.049083 | orchestrator | Monday 02 June 2025 00:49:08 +0000 (0:00:02.082) 0:07:26.248 *********** 2025-06-02 00:52:01.049089 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-02 00:52:01.049094 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-02 00:52:01.049100 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-02 00:52:01.049105 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 00:52:01.049111 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-02 00:52:01.049116 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-02 00:52:01.049122 | orchestrator | 2025-06-02 00:52:01.049127 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-02 00:52:01.049133 | orchestrator | Monday 02 June 2025 00:49:11 +0000 (0:00:03.379) 0:07:29.628 *********** 2025-06-02 00:52:01.049138 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.049144 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.049150 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-02 00:52:01.049155 | orchestrator | 2025-06-02 00:52:01.049161 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-02 00:52:01.049166 | orchestrator | Monday 02 June 2025 00:49:13 +0000 (0:00:02.356) 0:07:31.984 *********** 2025-06-02 00:52:01.049172 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.049177 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.049183 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-06-02 00:52:01.049188 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-06-02 00:52:01.049194 | orchestrator |
2025-06-02 00:52:01.049199 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-06-02 00:52:01.049205 | orchestrator | Monday 02 June 2025 00:49:26 +0000 (0:00:12.776) 0:07:44.761 ***********
2025-06-02 00:52:01.049210 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049216 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.049221 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.049227 | orchestrator |
2025-06-02 00:52:01.049232 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 00:52:01.049238 | orchestrator | Monday 02 June 2025 00:49:27 +0000 (0:00:00.786) 0:07:45.547 ***********
2025-06-02 00:52:01.049243 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049249 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.049254 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.049260 | orchestrator |
2025-06-02 00:52:01.049265 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-02 00:52:01.049271 | orchestrator | Monday 02 June 2025 00:49:28 +0000 (0:00:00.577) 0:07:46.125 ***********
2025-06-02 00:52:01.049276 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.049282 | orchestrator |
2025-06-02 00:52:01.049287 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-02 00:52:01.049298 | orchestrator | Monday 02 June 2025 00:49:28 +0000 (0:00:00.501) 0:07:46.627 ***********
2025-06-02 00:52:01.049303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:52:01.049309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:52:01.049314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:52:01.049320 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049325 | orchestrator |
2025-06-02 00:52:01.049331 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-02 00:52:01.049336 | orchestrator | Monday 02 June 2025 00:49:28 +0000 (0:00:00.376) 0:07:47.004 ***********
2025-06-02 00:52:01.049342 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049347 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.049353 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.049358 | orchestrator |
2025-06-02 00:52:01.049364 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-02 00:52:01.049369 | orchestrator | Monday 02 June 2025 00:49:29 +0000 (0:00:00.279) 0:07:47.283 ***********
2025-06-02 00:52:01.049375 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049380 | orchestrator |
2025-06-02 00:52:01.049386 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-02 00:52:01.049391 | orchestrator | Monday 02 June 2025 00:49:29 +0000 (0:00:00.537) 0:07:47.490 ***********
2025-06-02 00:52:01.049397 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049402 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.049408 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.049412 | orchestrator |
2025-06-02 00:52:01.049417 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-02 00:52:01.049422 | orchestrator | Monday 02 June 2025 00:49:29 +0000 (0:00:00.211) 0:07:48.028 ***********
2025-06-02 00:52:01.049427 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049432 | orchestrator |
2025-06-02 00:52:01.049437 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-02 00:52:01.049444 | orchestrator | Monday 02 June 2025 00:49:30 +0000 (0:00:00.211) 0:07:48.239 ***********
2025-06-02 00:52:01.049449 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049454 | orchestrator |
2025-06-02 00:52:01.049459 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-02 00:52:01.049463 | orchestrator | Monday 02 June 2025 00:49:30 +0000 (0:00:00.221) 0:07:48.461 ***********
2025-06-02 00:52:01.049468 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049473 | orchestrator |
2025-06-02 00:52:01.049478 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-02 00:52:01.049483 | orchestrator | Monday 02 June 2025 00:49:30 +0000 (0:00:00.124) 0:07:48.585 ***********
2025-06-02 00:52:01.049488 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049492 | orchestrator |
2025-06-02 00:52:01.049497 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-06-02 00:52:01.049502 | orchestrator | Monday 02 June 2025 00:49:30 +0000 (0:00:00.210) 0:07:48.795 ***********
2025-06-02 00:52:01.049507 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049512 | orchestrator |
2025-06-02 00:52:01.049519 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-06-02 00:52:01.049524 | orchestrator | Monday 02 June 2025 00:49:30 +0000 (0:00:00.215) 0:07:49.011 ***********
2025-06-02 00:52:01.049529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:52:01.049534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:52:01.049539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:52:01.049544 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049548 | orchestrator |
2025-06-02 00:52:01.049553 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-06-02 00:52:01.049558 | orchestrator | Monday 02 June 2025 00:49:31 +0000 (0:00:00.372) 0:07:49.383 ***********
2025-06-02 00:52:01.049566 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049571 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.049576 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.049581 | orchestrator |
2025-06-02 00:52:01.049586 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-06-02 00:52:01.049590 | orchestrator | Monday 02 June 2025 00:49:31 +0000 (0:00:00.301) 0:07:49.685 ***********
2025-06-02 00:52:01.049595 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049600 | orchestrator |
2025-06-02 00:52:01.049605 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-06-02 00:52:01.049610 | orchestrator | Monday 02 June 2025 00:49:32 +0000 (0:00:00.732) 0:07:50.418 ***********
2025-06-02 00:52:01.049615 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049620 | orchestrator |
2025-06-02 00:52:01.049625 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-06-02 00:52:01.049629 | orchestrator |
2025-06-02 00:52:01.049634 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 00:52:01.049639 | orchestrator | Monday 02 June 2025 00:49:32 +0000 (0:00:00.656) 0:07:51.074 ***********
2025-06-02 00:52:01.049644 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.049649 | orchestrator |
2025-06-02 00:52:01.049654 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 00:52:01.049659 | orchestrator | Monday 02 June 2025 00:49:34 +0000 (0:00:01.234) 0:07:52.309 ***********
2025-06-02 00:52:01.049664 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.049669 | orchestrator |
2025-06-02 00:52:01.049674 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 00:52:01.049678 | orchestrator | Monday 02 June 2025 00:49:35 +0000 (0:00:01.421) 0:07:53.731 ***********
2025-06-02 00:52:01.049683 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049688 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.049693 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.049698 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.049703 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.049708 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.049713 | orchestrator |
2025-06-02 00:52:01.049718 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 00:52:01.049723 | orchestrator | Monday 02 June 2025 00:49:36 +0000 (0:00:00.937) 0:07:54.527 ***********
2025-06-02 00:52:01.049727 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.049732 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.049737 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.049742 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.049747 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.049752 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.049757 | orchestrator |
2025-06-02 00:52:01.049762 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 00:52:01.049767 | orchestrator | Monday 02 June 2025 00:49:37 +0000 (0:00:00.937) 0:07:55.464 ***********
2025-06-02 00:52:01.049772 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.049777 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.049781 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.049786 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.049791 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.049796 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.049801 | orchestrator |
2025-06-02 00:52:01.049806 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 00:52:01.049811 | orchestrator | Monday 02 June 2025 00:49:38 +0000 (0:00:01.169) 0:07:56.634 ***********
2025-06-02 00:52:01.049819 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.049824 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.049829 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.049833 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.049838 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.049843 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.049848 | orchestrator |
2025-06-02 00:52:01.049853 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 00:52:01.049858 | orchestrator | Monday 02 June 2025 00:49:39 +0000 (0:00:00.949) 0:07:57.583 ***********
2025-06-02 00:52:01.049863 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049868 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.049873 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.049878 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.049883 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.049888 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.049893 | orchestrator |
2025-06-02 00:52:01.049898 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 00:52:01.049902 | orchestrator | Monday 02 June 2025 00:49:40 +0000 (0:00:00.829) 0:07:58.412 ***********
2025-06-02 00:52:01.049907 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.049912 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.049917 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.049922 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049927 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.049932 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.049947 | orchestrator |
2025-06-02 00:52:01.049955 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 00:52:01.049960 | orchestrator | Monday 02 June 2025 00:49:40 +0000 (0:00:00.574) 0:07:58.987 ***********
2025-06-02 00:52:01.049965 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.049970 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.049975 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.049980 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.049985 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.049989 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.049994 | orchestrator |
2025-06-02 00:52:01.049999 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 00:52:01.050004 | orchestrator | Monday 02 June 2025 00:49:41 +0000 (0:00:00.809) 0:07:59.797 ***********
2025-06-02 00:52:01.050009 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050035 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.050041 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.050046 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.050051 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.050056 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.050061 | orchestrator |
2025-06-02 00:52:01.050066 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 00:52:01.050071 | orchestrator | Monday 02 June 2025 00:49:42 +0000 (0:00:01.057) 0:08:00.854 ***********
2025-06-02 00:52:01.050076 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050081 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.050086 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.050122 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.050133 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.050138 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.050143 | orchestrator |
2025-06-02 00:52:01.050148 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 00:52:01.050153 | orchestrator | Monday 02 June 2025 00:49:43 +0000 (0:00:01.207) 0:08:02.061 ***********
2025-06-02 00:52:01.050157 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.050162 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.050167 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.050172 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.050180 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.050185 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.050190 | orchestrator |
2025-06-02 00:52:01.050195 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 00:52:01.050200 | orchestrator | Monday 02 June 2025 00:49:44 +0000 (0:00:00.571) 0:08:02.633 ***********
2025-06-02 00:52:01.050204 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050209 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.050214 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.050219 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.050224 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.050229 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.050233 | orchestrator |
2025-06-02 00:52:01.050238 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 00:52:01.050243 | orchestrator | Monday 02 June 2025 00:49:45 +0000 (0:00:00.760) 0:08:03.394 ***********
2025-06-02 00:52:01.050248 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.050253 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.050258 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.050263 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.050267 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.050272 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.050277 | orchestrator |
2025-06-02 00:52:01.050282 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 00:52:01.050287 | orchestrator | Monday 02 June 2025 00:49:45 +0000 (0:00:00.623) 0:08:04.017 ***********
2025-06-02 00:52:01.050292 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.050297 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.050301 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.050306 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.050311 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.050316 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.050321 | orchestrator |
2025-06-02 00:52:01.050326 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 00:52:01.050331 | orchestrator | Monday 02 June 2025 00:49:46 +0000 (0:00:00.771) 0:08:04.788 ***********
2025-06-02 00:52:01.050336 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.050341 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.050345 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.050350 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.050355 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.050360 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.050365 | orchestrator |
2025-06-02 00:52:01.050370 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 00:52:01.050375 | orchestrator | Monday 02 June 2025 00:49:47 +0000 (0:00:00.603) 0:08:05.392 ***********
2025-06-02 00:52:01.050379 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.050384 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.050389 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.050394 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.050399 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.050404 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.050408 | orchestrator |
2025-06-02 00:52:01.050415 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 00:52:01.050420 | orchestrator | Monday 02 June 2025 00:49:48 +0000 (0:00:00.732) 0:08:06.124 ***********
2025-06-02 00:52:01.050425 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:52:01.050430 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:52:01.050435 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:52:01.050440 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.050444 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.050449 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.050454 | orchestrator |
2025-06-02 00:52:01.050459 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 00:52:01.050466 | orchestrator | Monday 02 June 2025 00:49:48 +0000 (0:00:00.532) 0:08:06.657 ***********
2025-06-02 00:52:01.050471 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050476 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.050481 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.050486 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.050491 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.050496 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.050501 | orchestrator |
2025-06-02 00:52:01.050510 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 00:52:01.050515 | orchestrator | Monday 02 June 2025 00:49:49 +0000 (0:00:00.753) 0:08:07.410 ***********
2025-06-02 00:52:01.050520 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050525 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.050530 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.050535 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.050539 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.050544 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.050549 | orchestrator |
2025-06-02 00:52:01.050554 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 00:52:01.050559 | orchestrator | Monday 02 June 2025 00:49:49 +0000 (0:00:00.573) 0:08:07.983 ***********
2025-06-02 00:52:01.050564 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050569 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.050574 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.050578 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.050583 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.050588 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.050593 | orchestrator |
2025-06-02 00:52:01.050598 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-06-02 00:52:01.050603 | orchestrator | Monday 02 June 2025 00:49:50 +0000 (0:00:01.110) 0:08:09.094 ***********
2025-06-02 00:52:01.050607 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.050612 | orchestrator |
2025-06-02 00:52:01.050617 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-06-02 00:52:01.050622 | orchestrator | Monday 02 June 2025 00:49:54 +0000 (0:00:03.740) 0:08:12.834 ***********
2025-06-02 00:52:01.050627 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050632 | orchestrator |
2025-06-02 00:52:01.050637 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-06-02 00:52:01.050642 | orchestrator | Monday 02 June 2025 00:49:56 +0000 (0:00:02.020) 0:08:14.855 ***********
2025-06-02 00:52:01.050647 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050651 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.050656 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.050661 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.050666 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.050671 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.050676 | orchestrator |
2025-06-02 00:52:01.050681 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-06-02 00:52:01.050686 | orchestrator | Monday 02 June 2025 00:49:58 +0000 (0:00:01.623) 0:08:16.478 ***********
2025-06-02 00:52:01.050690 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.050695 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.050700 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.050705 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.050710 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.050715 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.050719 | orchestrator |
2025-06-02 00:52:01.050724 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-06-02 00:52:01.050729 | orchestrator | Monday 02 June 2025 00:49:59 +0000 (0:00:00.881) 0:08:17.360 ***********
2025-06-02 00:52:01.050734 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.050742 | orchestrator |
2025-06-02 00:52:01.050747 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-06-02 00:52:01.050752 | orchestrator | Monday 02 June 2025 00:50:00 +0000 (0:00:01.140) 0:08:18.500 ***********
2025-06-02 00:52:01.050757 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.050762 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.050767 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.050772 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.050777 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.050781 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.050786 | orchestrator |
2025-06-02 00:52:01.050791 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-06-02 00:52:01.050796 | orchestrator | Monday 02 June 2025 00:50:01 +0000 (0:00:01.605) 0:08:20.106 ***********
2025-06-02 00:52:01.050801 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.050806 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.050811 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.050816 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.050821 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.050825 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.050830 | orchestrator |
2025-06-02 00:52:01.050835 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-06-02 00:52:01.050840 | orchestrator | Monday 02 June 2025 00:50:05 +0000 (0:00:03.146) 0:08:23.252 ***********
2025-06-02 00:52:01.050845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.050850 | orchestrator |
2025-06-02 00:52:01.050857 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-06-02 00:52:01.050862 | orchestrator | Monday 02 June 2025 00:50:06 +0000 (0:00:01.110) 0:08:24.362 ***********
2025-06-02 00:52:01.050867 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050872 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.050876 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.050881 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.050886 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.050891 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.050896 | orchestrator |
2025-06-02 00:52:01.050901 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-06-02 00:52:01.050905 | orchestrator | Monday 02 June 2025 00:50:06 +0000 (0:00:00.526) 0:08:24.889 ***********
2025-06-02 00:52:01.050910 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:52:01.050915 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:52:01.050920 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:52:01.050925 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.050930 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.050935 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.050950 | orchestrator |
2025-06-02 00:52:01.050955 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-06-02 00:52:01.050963 | orchestrator | Monday 02 June 2025 00:50:08 +0000 (0:00:02.034) 0:08:26.923 ***********
2025-06-02 00:52:01.050968 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:52:01.050973 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:52:01.050978 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:52:01.050983 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.050988 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.050993 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.050998 | orchestrator |
2025-06-02 00:52:01.051003 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-06-02 00:52:01.051008 | orchestrator |
2025-06-02 00:52:01.051013 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 00:52:01.051017 | orchestrator | Monday 02 June 2025 00:50:09 +0000 (0:00:00.882) 0:08:27.806 ***********
2025-06-02 00:52:01.051022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.051030 | orchestrator |
2025-06-02 00:52:01.051035 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 00:52:01.051040 | orchestrator | Monday 02 June 2025 00:50:10 +0000 (0:00:00.430) 0:08:28.236 ***********
2025-06-02 00:52:01.051045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.051050 | orchestrator |
2025-06-02 00:52:01.051054 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 00:52:01.051059 | orchestrator | Monday 02 June 2025 00:50:10 +0000 (0:00:00.610) 0:08:28.847 ***********
2025-06-02 00:52:01.051064 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.051069 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.051074 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.051079 | orchestrator |
2025-06-02 00:52:01.051084 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 00:52:01.051089 | orchestrator | Monday 02 June 2025 00:50:10 +0000 (0:00:00.239) 0:08:29.086 ***********
2025-06-02 00:52:01.051094 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.051099 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.051104 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.051108 | orchestrator |
2025-06-02 00:52:01.051113 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 00:52:01.051118 | orchestrator | Monday 02 June 2025 00:50:11 +0000 (0:00:00.619) 0:08:29.705 ***********
2025-06-02 00:52:01.051123 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.051128 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.051133 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.051138 | orchestrator |
2025-06-02 00:52:01.051143 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 00:52:01.051148 | orchestrator | Monday 02 June 2025 00:50:12 +0000 (0:00:00.722) 0:08:30.427 ***********
2025-06-02 00:52:01.051153 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.051157 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.051162 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.051167 | orchestrator |
2025-06-02 00:52:01.051172 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 00:52:01.051177 | orchestrator | Monday 02 June 2025 00:50:12 +0000 (0:00:00.630) 0:08:31.058 ***********
2025-06-02 00:52:01.051182 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.051187 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.051192 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.051197 | orchestrator |
2025-06-02 00:52:01.051202 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 00:52:01.051207 | orchestrator | Monday 02 June 2025 00:50:13 +0000 (0:00:00.238) 0:08:31.296 ***********
2025-06-02 00:52:01.051212 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.051217 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.051222 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.051226 | orchestrator |
2025-06-02 00:52:01.051231 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 00:52:01.051236 | orchestrator | Monday 02 June 2025 00:50:13 +0000 (0:00:00.249) 0:08:31.546 ***********
2025-06-02 00:52:01.051241 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.051246 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.051251 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.051256 | orchestrator |
2025-06-02 00:52:01.051261 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 00:52:01.051266 | orchestrator | Monday 02 June 2025 00:50:13 +0000 (0:00:00.421) 0:08:31.967 ***********
2025-06-02 00:52:01.051270 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.051275 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.051280 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.051285 | orchestrator |
2025-06-02 00:52:01.051290 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 00:52:01.051298 | orchestrator | Monday 02 June 2025 00:50:14 +0000 (0:00:00.613) 0:08:32.581 ***********
2025-06-02 00:52:01.051303 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.051310 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.051315 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.051320 | orchestrator |
2025-06-02 00:52:01.051324 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 00:52:01.051329 | orchestrator | Monday 02 June 2025 00:50:15 +0000 (0:00:00.761) 0:08:33.343 ***********
2025-06-02 00:52:01.051334 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.051339 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.051344 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.051349 | orchestrator |
2025-06-02 00:52:01.051354 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 00:52:01.051359 | orchestrator | Monday 02 June 2025 00:50:15 +0000 (0:00:00.343) 0:08:33.686 ***********
2025-06-02 00:52:01.051364 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.051369 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.051374 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.051379 | orchestrator |
2025-06-02 00:52:01.051384 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 00:52:01.051388 | orchestrator | Monday 02 June 2025 00:50:15 +0000 (0:00:00.399) 0:08:34.086 ***********
2025-06-02 00:52:01.051396 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.051401 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.051406 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.051411 | orchestrator |
2025-06-02 00:52:01.051416 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 00:52:01.051421 | orchestrator | Monday 02 June 2025 00:50:16 +0000 (0:00:00.293) 0:08:34.379 ***********
2025-06-02 00:52:01.051425 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.051430 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.051435 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.051440 | orchestrator |
2025-06-02 00:52:01.051445 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 00:52:01.051450 | orchestrator | Monday 02 June 2025 00:50:16 +0000 (0:00:00.239) 0:08:34.619 ***********
2025-06-02 00:52:01.051455 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.051460 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.051465 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.051470 | orchestrator |
2025-06-02 00:52:01.051474 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 00:52:01.051479 | orchestrator | Monday 02 June 2025 00:50:16 +0000 (0:00:00.260) 0:08:34.880 ***********
2025-06-02 00:52:01.051484 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.051489 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.051494 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.051499 | orchestrator |
2025-06-02 00:52:01.051504 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 00:52:01.051509 | orchestrator | Monday 02 June 2025 00:50:17 +0000 (0:00:00.363) 0:08:35.243 ***********
2025-06-02 00:52:01.051514 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.051519 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.051524 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.051528 | orchestrator |
2025-06-02 00:52:01.051533 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 00:52:01.051538 | orchestrator | Monday 02 June 2025 00:50:17 +0000 (0:00:00.216) 0:08:35.459 ***********
2025-06-02 00:52:01.051543 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.051548 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.051553 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.051558 | orchestrator |
2025-06-02 00:52:01.051563 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 00:52:01.051568 | orchestrator | Monday 02 June 2025 00:50:17 +0000 (0:00:00.211) 0:08:35.670 ***********
2025-06-02 00:52:01.051575 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.051580 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.051585 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.051590 | orchestrator | 2025-06-02 00:52:01.051595 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 00:52:01.051600 | orchestrator | Monday 02 June 2025 00:50:17 +0000 (0:00:00.240) 0:08:35.911 *********** 2025-06-02 00:52:01.051605 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.051610 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.051615 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.051620 | orchestrator | 2025-06-02 00:52:01.051625 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-02 00:52:01.051630 | orchestrator | Monday 02 June 2025 00:50:18 +0000 (0:00:00.602) 0:08:36.514 *********** 2025-06-02 00:52:01.051635 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.051640 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.051645 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-02 00:52:01.051649 | orchestrator | 2025-06-02 00:52:01.051654 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-02 00:52:01.051659 | orchestrator | Monday 02 June 2025 00:50:18 +0000 (0:00:00.322) 0:08:36.836 *********** 2025-06-02 00:52:01.051664 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 00:52:01.051669 | orchestrator | 2025-06-02 00:52:01.051674 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-02 00:52:01.051679 | orchestrator | Monday 02 June 2025 00:50:20 +0000 (0:00:02.073) 0:08:38.909 *********** 2025-06-02 00:52:01.051684 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-02 00:52:01.051689 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.051697 | orchestrator | 2025-06-02 00:52:01.051706 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-02 00:52:01.051711 | orchestrator | Monday 02 June 2025 00:50:20 +0000 (0:00:00.178) 0:08:39.087 *********** 2025-06-02 00:52:01.051719 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 00:52:01.051727 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 00:52:01.051732 | orchestrator | 2025-06-02 00:52:01.051737 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-02 00:52:01.051742 | orchestrator | Monday 02 June 2025 00:50:28 +0000 (0:00:08.027) 0:08:47.115 *********** 2025-06-02 00:52:01.051747 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 00:52:01.051752 | orchestrator | 2025-06-02 00:52:01.051757 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-02 00:52:01.051762 | orchestrator | Monday 02 June 2025 00:50:32 +0000 (0:00:03.463) 0:08:50.579 *********** 2025-06-02 00:52:01.051769 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.051774 | orchestrator | 2025-06-02 00:52:01.051780 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-02 00:52:01.051784 | orchestrator | Monday 02 June 2025 00:50:32 +0000 (0:00:00.511) 0:08:51.091 *********** 2025-06-02 00:52:01.051789 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 00:52:01.051797 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 00:52:01.051802 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 00:52:01.051807 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-02 00:52:01.051812 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-02 00:52:01.051817 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-02 00:52:01.051822 | orchestrator | 2025-06-02 00:52:01.051827 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-02 00:52:01.051832 | orchestrator | Monday 02 June 2025 00:50:33 +0000 (0:00:00.976) 0:08:52.067 *********** 2025-06-02 00:52:01.051837 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:52:01.051842 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 00:52:01.051847 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 00:52:01.051852 | orchestrator | 2025-06-02 00:52:01.051857 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-02 00:52:01.051861 | orchestrator | Monday 02 June 2025 00:50:36 +0000 (0:00:02.450) 0:08:54.517 *********** 2025-06-02 00:52:01.051866 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 00:52:01.051871 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 00:52:01.051877 | orchestrator | changed: [testbed-node-3] 
2025-06-02 00:52:01.051882 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 00:52:01.051887 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 00:52:01.051892 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.051897 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 00:52:01.051901 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 00:52:01.051907 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.051911 | orchestrator | 2025-06-02 00:52:01.051916 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-02 00:52:01.051921 | orchestrator | Monday 02 June 2025 00:50:37 +0000 (0:00:01.394) 0:08:55.912 *********** 2025-06-02 00:52:01.051926 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.051931 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.051945 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.051950 | orchestrator | 2025-06-02 00:52:01.051955 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-02 00:52:01.051960 | orchestrator | Monday 02 June 2025 00:50:40 +0000 (0:00:02.550) 0:08:58.462 *********** 2025-06-02 00:52:01.051965 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.051970 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.051975 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.051979 | orchestrator | 2025-06-02 00:52:01.051984 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-02 00:52:01.051989 | orchestrator | Monday 02 June 2025 00:50:40 +0000 (0:00:00.291) 0:08:58.753 *********** 2025-06-02 00:52:01.051994 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.051999 | orchestrator | 2025-06-02 00:52:01.052004 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-02 00:52:01.052009 | orchestrator | Monday 02 June 2025 00:50:41 +0000 (0:00:00.708) 0:08:59.462 *********** 2025-06-02 00:52:01.052014 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.052019 | orchestrator | 2025-06-02 00:52:01.052023 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-02 00:52:01.052028 | orchestrator | Monday 02 June 2025 00:50:41 +0000 (0:00:00.479) 0:08:59.942 *********** 2025-06-02 00:52:01.052033 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.052038 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.052046 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.052051 | orchestrator | 2025-06-02 00:52:01.052056 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-02 00:52:01.052061 | orchestrator | Monday 02 June 2025 00:50:43 +0000 (0:00:01.221) 0:09:01.164 *********** 2025-06-02 00:52:01.052065 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.052070 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.052078 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.052083 | orchestrator | 2025-06-02 00:52:01.052088 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-02 00:52:01.052093 | orchestrator | Monday 02 June 2025 00:50:44 +0000 (0:00:01.332) 0:09:02.497 *********** 2025-06-02 00:52:01.052098 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.052103 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.052108 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.052112 | orchestrator | 2025-06-02 00:52:01.052118 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-06-02 00:52:01.052123 | orchestrator | Monday 02 June 2025 00:50:46 +0000 (0:00:01.725) 0:09:04.223 *********** 2025-06-02 00:52:01.052128 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.052132 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.052137 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.052144 | orchestrator | 2025-06-02 00:52:01.052152 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-02 00:52:01.052157 | orchestrator | Monday 02 June 2025 00:50:48 +0000 (0:00:01.941) 0:09:06.164 *********** 2025-06-02 00:52:01.052162 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052169 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052175 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052180 | orchestrator | 2025-06-02 00:52:01.052184 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 00:52:01.052189 | orchestrator | Monday 02 June 2025 00:50:49 +0000 (0:00:01.373) 0:09:07.538 *********** 2025-06-02 00:52:01.052194 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.052199 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.052204 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.052209 | orchestrator | 2025-06-02 00:52:01.052214 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-02 00:52:01.052219 | orchestrator | Monday 02 June 2025 00:50:50 +0000 (0:00:00.599) 0:09:08.137 *********** 2025-06-02 00:52:01.052224 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.052229 | orchestrator | 2025-06-02 00:52:01.052234 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-02 00:52:01.052238 | orchestrator | 
Monday 02 June 2025 00:50:50 +0000 (0:00:00.704) 0:09:08.841 *********** 2025-06-02 00:52:01.052243 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052248 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052253 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052258 | orchestrator | 2025-06-02 00:52:01.052263 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-02 00:52:01.052268 | orchestrator | Monday 02 June 2025 00:50:51 +0000 (0:00:00.326) 0:09:09.168 *********** 2025-06-02 00:52:01.052273 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.052278 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.052283 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.052288 | orchestrator | 2025-06-02 00:52:01.052292 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-02 00:52:01.052297 | orchestrator | Monday 02 June 2025 00:50:52 +0000 (0:00:01.134) 0:09:10.302 *********** 2025-06-02 00:52:01.052302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 00:52:01.052307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 00:52:01.052312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 00:52:01.052322 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052327 | orchestrator | 2025-06-02 00:52:01.052331 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-02 00:52:01.052336 | orchestrator | Monday 02 June 2025 00:50:52 +0000 (0:00:00.817) 0:09:11.119 *********** 2025-06-02 00:52:01.052341 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052346 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052351 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052356 | orchestrator | 2025-06-02 00:52:01.052361 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-06-02 00:52:01.052366 | orchestrator | 2025-06-02 00:52:01.052375 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 00:52:01.052380 | orchestrator | Monday 02 June 2025 00:50:53 +0000 (0:00:00.743) 0:09:11.862 *********** 2025-06-02 00:52:01.052385 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.052390 | orchestrator | 2025-06-02 00:52:01.052395 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 00:52:01.052400 | orchestrator | Monday 02 June 2025 00:50:54 +0000 (0:00:00.474) 0:09:12.336 *********** 2025-06-02 00:52:01.052404 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.052409 | orchestrator | 2025-06-02 00:52:01.052414 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 00:52:01.052419 | orchestrator | Monday 02 June 2025 00:50:54 +0000 (0:00:00.699) 0:09:13.036 *********** 2025-06-02 00:52:01.052424 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052429 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.052433 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.052438 | orchestrator | 2025-06-02 00:52:01.052443 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 00:52:01.052448 | orchestrator | Monday 02 June 2025 00:50:55 +0000 (0:00:00.304) 0:09:13.341 *********** 2025-06-02 00:52:01.052453 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052458 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052463 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052467 | orchestrator | 
2025-06-02 00:52:01.052472 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 00:52:01.052477 | orchestrator | Monday 02 June 2025 00:50:55 +0000 (0:00:00.675) 0:09:14.016 *********** 2025-06-02 00:52:01.052482 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052487 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052492 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052497 | orchestrator | 2025-06-02 00:52:01.052504 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 00:52:01.052508 | orchestrator | Monday 02 June 2025 00:50:56 +0000 (0:00:00.707) 0:09:14.724 *********** 2025-06-02 00:52:01.052513 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052518 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052523 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052528 | orchestrator | 2025-06-02 00:52:01.052533 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 00:52:01.052537 | orchestrator | Monday 02 June 2025 00:50:57 +0000 (0:00:00.946) 0:09:15.670 *********** 2025-06-02 00:52:01.052542 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052547 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.052552 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.052557 | orchestrator | 2025-06-02 00:52:01.052561 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 00:52:01.052566 | orchestrator | Monday 02 June 2025 00:50:57 +0000 (0:00:00.317) 0:09:15.988 *********** 2025-06-02 00:52:01.052571 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052576 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.052581 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.052590 | orchestrator | 2025-06-02 00:52:01.052598 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 00:52:01.052603 | orchestrator | Monday 02 June 2025 00:50:58 +0000 (0:00:00.269) 0:09:16.257 *********** 2025-06-02 00:52:01.052608 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052613 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.052617 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.052622 | orchestrator | 2025-06-02 00:52:01.052627 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 00:52:01.052632 | orchestrator | Monday 02 June 2025 00:50:58 +0000 (0:00:00.276) 0:09:16.534 *********** 2025-06-02 00:52:01.052637 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052642 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052679 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052685 | orchestrator | 2025-06-02 00:52:01.052690 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 00:52:01.052695 | orchestrator | Monday 02 June 2025 00:50:59 +0000 (0:00:00.945) 0:09:17.479 *********** 2025-06-02 00:52:01.052700 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052705 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052710 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052715 | orchestrator | 2025-06-02 00:52:01.052719 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 00:52:01.052724 | orchestrator | Monday 02 June 2025 00:51:00 +0000 (0:00:00.688) 0:09:18.167 *********** 2025-06-02 00:52:01.052729 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052734 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.052739 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.052744 | orchestrator | 2025-06-02 00:52:01.052749 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-06-02 00:52:01.052754 | orchestrator | Monday 02 June 2025 00:51:00 +0000 (0:00:00.266) 0:09:18.434 *********** 2025-06-02 00:52:01.052758 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052763 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.052768 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.052773 | orchestrator | 2025-06-02 00:52:01.052778 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 00:52:01.052783 | orchestrator | Monday 02 June 2025 00:51:00 +0000 (0:00:00.274) 0:09:18.708 *********** 2025-06-02 00:52:01.052788 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052793 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052798 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052803 | orchestrator | 2025-06-02 00:52:01.052807 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 00:52:01.052812 | orchestrator | Monday 02 June 2025 00:51:01 +0000 (0:00:00.560) 0:09:19.269 *********** 2025-06-02 00:52:01.052817 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052822 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052827 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052832 | orchestrator | 2025-06-02 00:52:01.052837 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 00:52:01.052842 | orchestrator | Monday 02 June 2025 00:51:01 +0000 (0:00:00.300) 0:09:19.570 *********** 2025-06-02 00:52:01.052846 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052851 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.052856 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.052861 | orchestrator | 2025-06-02 00:52:01.052866 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-06-02 00:52:01.052871 | orchestrator | Monday 02 June 2025 00:51:01 +0000 (0:00:00.285) 0:09:19.855 *********** 2025-06-02 00:52:01.052876 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052881 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.052885 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.052890 | orchestrator | 2025-06-02 00:52:01.052895 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 00:52:01.052903 | orchestrator | Monday 02 June 2025 00:51:02 +0000 (0:00:00.288) 0:09:20.144 *********** 2025-06-02 00:52:01.052908 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052913 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.052918 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.052923 | orchestrator | 2025-06-02 00:52:01.052928 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 00:52:01.052933 | orchestrator | Monday 02 June 2025 00:51:02 +0000 (0:00:00.512) 0:09:20.657 *********** 2025-06-02 00:52:01.052961 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.052966 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.052971 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.052976 | orchestrator | 2025-06-02 00:52:01.052981 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 00:52:01.052986 | orchestrator | Monday 02 June 2025 00:51:02 +0000 (0:00:00.293) 0:09:20.950 *********** 2025-06-02 00:52:01.052991 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.052996 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.053001 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.053005 | orchestrator | 2025-06-02 00:52:01.053010 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-06-02 00:52:01.053018 | orchestrator | Monday 02 June 2025 00:51:03 +0000 (0:00:00.328) 0:09:21.278 *********** 2025-06-02 00:52:01.053023 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:52:01.053027 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:52:01.053032 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:52:01.053037 | orchestrator | 2025-06-02 00:52:01.053041 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-02 00:52:01.053046 | orchestrator | Monday 02 June 2025 00:51:03 +0000 (0:00:00.733) 0:09:22.012 *********** 2025-06-02 00:52:01.053050 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.053055 | orchestrator | 2025-06-02 00:52:01.053060 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 00:52:01.053064 | orchestrator | Monday 02 June 2025 00:51:04 +0000 (0:00:00.501) 0:09:22.513 *********** 2025-06-02 00:52:01.053069 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:52:01.053074 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 00:52:01.053078 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 00:52:01.053083 | orchestrator | 2025-06-02 00:52:01.053090 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 00:52:01.053095 | orchestrator | Monday 02 June 2025 00:51:06 +0000 (0:00:01.986) 0:09:24.500 *********** 2025-06-02 00:52:01.053100 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 00:52:01.053104 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 00:52:01.053109 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:52:01.053114 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 00:52:01.053119 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 00:52:01.053123 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:52:01.053128 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 00:52:01.053133 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 00:52:01.053137 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:52:01.053142 | orchestrator | 2025-06-02 00:52:01.053147 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-02 00:52:01.053151 | orchestrator | Monday 02 June 2025 00:51:07 +0000 (0:00:01.367) 0:09:25.868 *********** 2025-06-02 00:52:01.053156 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:52:01.053161 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:52:01.053165 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:52:01.053170 | orchestrator | 2025-06-02 00:52:01.053175 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-02 00:52:01.053182 | orchestrator | Monday 02 June 2025 00:51:08 +0000 (0:00:00.322) 0:09:26.191 *********** 2025-06-02 00:52:01.053187 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:52:01.053192 | orchestrator | 2025-06-02 00:52:01.053196 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-02 00:52:01.053201 | orchestrator | Monday 02 June 2025 00:51:08 +0000 (0:00:00.500) 0:09:26.692 *********** 2025-06-02 00:52:01.053205 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 00:52:01.053210 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-06-02 00:52:01.053215 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 00:52:01.053220 | orchestrator | 2025-06-02 00:52:01.053224 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-02 00:52:01.053229 | orchestrator | Monday 02 June 2025 00:51:09 +0000 (0:00:01.220) 0:09:27.912 *********** 2025-06-02 00:52:01.053234 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:52:01.053238 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 00:52:01.053243 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:52:01.053247 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:52:01.053252 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 00:52:01.053257 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 00:52:01.053261 | orchestrator | 2025-06-02 00:52:01.053266 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 00:52:01.053271 | orchestrator | Monday 02 June 2025 00:51:14 +0000 (0:00:04.208) 0:09:32.121 *********** 2025-06-02 00:52:01.053275 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:52:01.053280 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 00:52:01.053284 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None)
2025-06-02 00:52:01.053289 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 00:52:01.053294 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:52:01.053298 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 00:52:01.053303 | orchestrator |
2025-06-02 00:52:01.053310 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-02 00:52:01.053314 | orchestrator | Monday 02 June 2025 00:51:16 +0000 (0:00:02.250) 0:09:34.371 ***********
2025-06-02 00:52:01.053319 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 00:52:01.053323 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.053328 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 00:52:01.053333 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.053337 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 00:52:01.053342 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.053347 | orchestrator |
2025-06-02 00:52:01.053351 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-06-02 00:52:01.053356 | orchestrator | Monday 02 June 2025 00:51:17 +0000 (0:00:01.152) 0:09:35.523 ***********
2025-06-02 00:52:01.053361 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-06-02 00:52:01.053368 | orchestrator |
2025-06-02 00:52:01.053372 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-06-02 00:52:01.053379 | orchestrator | Monday 02 June 2025 00:51:17 +0000 (0:00:00.226) 0:09:35.750 ***********
2025-06-02 00:52:01.053384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053408 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.053412 | orchestrator |
2025-06-02 00:52:01.053417 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-06-02 00:52:01.053422 | orchestrator | Monday 02 June 2025 00:51:18 +0000 (0:00:01.173) 0:09:36.923 ***********
2025-06-02 00:52:01.053426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053450 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.053454 | orchestrator |
2025-06-02 00:52:01.053459 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-06-02 00:52:01.053464 | orchestrator | Monday 02 June 2025 00:51:19 +0000 (0:00:00.574) 0:09:37.498 ***********
2025-06-02 00:52:01.053468 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053473 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053478 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053483 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053487 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 00:52:01.053492 | orchestrator |
2025-06-02 00:52:01.053496 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-06-02 00:52:01.053501 | orchestrator | Monday 02 June 2025 00:51:48 +0000 (0:00:28.716) 0:10:06.214 ***********
2025-06-02 00:52:01.053506 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.053510 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.053515 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.053520 | orchestrator |
2025-06-02 00:52:01.053528 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-06-02 00:52:01.053532 | orchestrator | Monday 02 June 2025 00:51:48 +0000 (0:00:00.278) 0:10:06.493 ***********
2025-06-02 00:52:01.053537 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.053542 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.053546 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.053551 | orchestrator |
2025-06-02 00:52:01.053556 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-06-02 00:52:01.053562 | orchestrator | Monday 02 June 2025 00:51:48 +0000 (0:00:00.306) 0:10:06.800 ***********
2025-06-02 00:52:01.053567 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.053572 | orchestrator |
2025-06-02 00:52:01.053577 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-06-02 00:52:01.053581 | orchestrator | Monday 02 June 2025 00:51:49 +0000 (0:00:00.704) 0:10:07.505 ***********
2025-06-02 00:52:01.053586 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.053591 | orchestrator |
2025-06-02 00:52:01.053595 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-06-02 00:52:01.053600 | orchestrator | Monday 02 June 2025 00:51:49 +0000 (0:00:00.522) 0:10:08.027 ***********
2025-06-02 00:52:01.053604 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.053609 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.053614 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.053619 | orchestrator |
2025-06-02 00:52:01.053625 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-06-02 00:52:01.053630 | orchestrator | Monday 02 June 2025 00:51:51 +0000 (0:00:01.187) 0:10:09.214 ***********
2025-06-02 00:52:01.053635 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.053640 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.053644 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.053649 | orchestrator |
2025-06-02 00:52:01.053654 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-06-02 00:52:01.053658 | orchestrator | Monday 02 June 2025 00:51:52 +0000 (0:00:01.345) 0:10:10.560 ***********
2025-06-02 00:52:01.053663 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:52:01.053667 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:52:01.053672 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:52:01.053677 | orchestrator |
2025-06-02 00:52:01.053681 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-06-02 00:52:01.053686 | orchestrator | Monday 02 June 2025 00:51:54 +0000 (0:00:01.763) 0:10:12.324 ***********
2025-06-02 00:52:01.053691 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.053695 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.053700 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 00:52:01.053705 | orchestrator |
2025-06-02 00:52:01.053710 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 00:52:01.053714 | orchestrator | Monday 02 June 2025 00:51:56 +0000 (0:00:02.523) 0:10:14.847 ***********
2025-06-02 00:52:01.053719 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.053723 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.053728 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.053733 | orchestrator |
2025-06-02 00:52:01.053737 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-02 00:52:01.053742 | orchestrator | Monday 02 June 2025 00:51:57 +0000 (0:00:00.347) 0:10:15.195 ***********
2025-06-02 00:52:01.053747 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:52:01.053754 | orchestrator |
2025-06-02 00:52:01.053759 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-02 00:52:01.053764 | orchestrator | Monday 02 June 2025 00:51:57 +0000 (0:00:00.481) 0:10:15.677 ***********
2025-06-02 00:52:01.053768 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.053773 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.053778 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.053782 | orchestrator |
2025-06-02 00:52:01.053787 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-02 00:52:01.053792 | orchestrator | Monday 02 June 2025 00:51:58 +0000 (0:00:00.530) 0:10:16.207 ***********
2025-06-02 00:52:01.053796 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.053801 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:52:01.053806 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:52:01.053810 | orchestrator |
2025-06-02 00:52:01.053815 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-02 00:52:01.053820 | orchestrator | Monday 02 June 2025 00:51:58 +0000 (0:00:00.326) 0:10:16.534 ***********
2025-06-02 00:52:01.053824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:52:01.053829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:52:01.053834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:52:01.053838 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:52:01.053843 | orchestrator |
2025-06-02 00:52:01.053848 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-02 00:52:01.053852 | orchestrator | Monday 02 June 2025 00:51:58 +0000 (0:00:00.570) 0:10:17.105 ***********
2025-06-02 00:52:01.053857 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:52:01.053862 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:52:01.053867 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:52:01.053871 | orchestrator |
2025-06-02 00:52:01.053876 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:52:01.053881 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-06-02 00:52:01.053886 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-06-02 00:52:01.053893 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-06-02 00:52:01.053897 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-06-02 00:52:01.053902 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-06-02 00:52:01.053907 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-06-02 00:52:01.053911 | orchestrator |
2025-06-02 00:52:01.053926 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:52:01.053930 | orchestrator | Monday 02 June 2025 00:51:59 +0000 (0:00:00.219) 0:10:17.325 ***********
2025-06-02 00:52:01.053946 | orchestrator | ===============================================================================
2025-06-02 00:52:01.053951 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 57.01s
2025-06-02 00:52:01.053955 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.35s
2025-06-02 00:52:01.053960 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.03s
2025-06-02 00:52:01.053965 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.72s
2025-06-02 00:52:01.053972 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.79s
2025-06-02 00:52:01.053977 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.45s
2025-06-02 00:52:01.053981 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.78s
2025-06-02 00:52:01.053986 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.20s
2025-06-02 00:52:01.053991 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.97s
2025-06-02 00:52:01.053995 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.03s
2025-06-02 00:52:01.054000 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.41s
2025-06-02 00:52:01.054005 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.37s
2025-06-02 00:52:01.054009 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.63s
2025-06-02 00:52:01.054035 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.21s
2025-06-02 00:52:01.054041 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.74s
2025-06-02 00:52:01.054045 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.46s
2025-06-02 00:52:01.054050 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.38s
2025-06-02 00:52:01.054055 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.15s
2025-06-02 00:52:01.054059 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.08s
2025-06-02 00:52:01.054064 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.07s
2025-06-02 00:52:01.054069 | orchestrator | 2025-06-02 00:52:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:52:04.078264 | orchestrator | 2025-06-02 00:52:04 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED
2025-06-02 00:52:04.080089 | orchestrator | 2025-06-02 00:52:04 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED
2025-06-02 00:52:04.080561 | orchestrator | 2025-06-02 00:52:04 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:53:05.087201 | orchestrator | 2025-06-02 00:53:05 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state STARTED
2025-06-02 00:53:05.088407 | orchestrator | 2025-06-02 00:53:05 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED
2025-06-02 00:53:05.088457 | orchestrator | 2025-06-02 00:53:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:53:08.139245 | orchestrator | 2025-06-02 00:53:08 | INFO  | Task f8692390-1a14-49aa-a5cc-dc899122e42f is in state SUCCESS
2025-06-02 00:53:08.142226 | orchestrator |
2025-06-02 00:53:08.142335 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-06-02 00:53:08.142349 | orchestrator |
2025-06-02 00:53:08.142360 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-02 00:53:08.142372 | orchestrator | Monday 02 June 2025 00:50:09 +0000 (0:00:00.094) 0:00:00.094 ***********
2025-06-02 00:53:08.142384 | orchestrator | ok: [localhost] => {
2025-06-02 00:53:08.142398 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-06-02 00:53:08.142410 | orchestrator | }
2025-06-02 00:53:08.142422 | orchestrator |
2025-06-02 00:53:08.142434 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-06-02 00:53:08.142445 | orchestrator | Monday 02 June 2025 00:50:09 +0000 (0:00:00.029) 0:00:00.124 ***********
2025-06-02 00:53:08.142456 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-06-02 00:53:08.142469 | orchestrator | ...ignoring
2025-06-02 00:53:08.142481 | orchestrator |
2025-06-02 00:53:08.142492 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-06-02 00:53:08.142503 | orchestrator | Monday 02 June 2025 00:50:12 +0000 (0:00:02.789) 0:00:02.913 ***********
2025-06-02 00:53:08.142548 | orchestrator | skipping: [localhost]
2025-06-02 00:53:08.142561 | orchestrator |
2025-06-02 00:53:08.142572 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-06-02 00:53:08.142583 | orchestrator | Monday 02 June 2025 00:50:12 +0000 (0:00:00.042) 0:00:02.956 ***********
2025-06-02 00:53:08.142594 | orchestrator | ok: [localhost]
2025-06-02 00:53:08.142605 | orchestrator |
2025-06-02 00:53:08.142616 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 00:53:08.142627 | orchestrator |
2025-06-02 00:53:08.142638 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 00:53:08.142649 | orchestrator | Monday 02 June 2025 00:50:12 +0000 (0:00:00.178) 0:00:03.134 ***********
2025-06-02 00:53:08.142660 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:53:08.142671 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:53:08.142682 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:53:08.142693 | orchestrator |
2025-06-02 00:53:08.142704 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 00:53:08.142715 | orchestrator | Monday 02 June 2025 00:50:13 +0000 (0:00:00.240) 0:00:03.375 ***********
2025-06-02 00:53:08.142726 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-06-02 00:53:08.142738 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
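[Editor's note: the "Timeout when waiting for search string MariaDB in 192.168.16.9:3306" failure above matches the message format of Ansible's `ansible.builtin.wait_for` module. A hypothetical sketch of such a pre-check is shown below; the host, port, and timeout values are taken from the log message, and task names from the output, but the actual OSISM playbook may differ.]

```yaml
# Hypothetical reconstruction of the MariaDB pre-check implied by the log;
# not the verbatim OSISM source.
- name: Check MariaDB service
  ansible.builtin.wait_for:
    host: 192.168.16.9      # address from the timeout message
    port: 3306
    search_regex: MariaDB   # the server greeting contains the version string
    timeout: 2              # consistent with "elapsed": 2 in the failure
  register: mariadb_check
  ignore_errors: true       # the play prints "...ignoring", so the failure is tolerated

- name: Set kolla_action_mariadb = upgrade if MariaDB is already running
  ansible.builtin.set_fact:
    kolla_action_mariadb: upgrade
  when: mariadb_check is success
```

On a first deployment the check times out, the error is ignored, and the fallback task sets `kolla_action_mariadb` from `kolla_action_ng` instead, which is exactly the sequence visible in the log.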
2025-06-02 00:53:08.142749 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-06-02 00:53:08.142759 | orchestrator |
2025-06-02 00:53:08.142770 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-06-02 00:53:08.142781 | orchestrator |
2025-06-02 00:53:08.142793 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-06-02 00:53:08.142803 | orchestrator | Monday 02 June 2025 00:50:13 +0000 (0:00:00.539) 0:00:03.915 ***********
2025-06-02 00:53:08.142844 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 00:53:08.142855 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 00:53:08.142866 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 00:53:08.142877 | orchestrator |
2025-06-02 00:53:08.142888 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-02 00:53:08.142899 | orchestrator | Monday 02 June 2025 00:50:13 +0000 (0:00:00.273) 0:00:04.188 ***********
2025-06-02 00:53:08.142910 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:53:08.142922 | orchestrator |
2025-06-02 00:53:08.142933 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-06-02 00:53:08.142958 | orchestrator | Monday 02 June 2025 00:50:14 +0000 (0:00:00.418) 0:00:04.607 ***********
2025-06-02 00:53:08.142994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 00:53:08.143020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 00:53:08.143039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 00:53:08.143060 | orchestrator |
2025-06-02 00:53:08.143082 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2025-06-02 00:53:08.143093 | orchestrator | Monday 02 June 2025 00:50:16 +0000 (0:00:02.511) 0:00:07.118 ***********
2025-06-02 00:53:08.143104 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:53:08.143116 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:53:08.143127 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:53:08.143138 | orchestrator |
2025-06-02 00:53:08.143149 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-06-02 00:53:08.143160 | orchestrator | Monday 02 June 2025 00:50:17 +0000 (0:00:00.512) 0:00:07.631 ***********
2025-06-02 00:53:08.143171 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:53:08.143182 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:53:08.143193 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:53:08.143204 | orchestrator |
2025-06-02 00:53:08.143215 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-06-02 00:53:08.143226 | orchestrator | Monday 02 June 2025 00:50:18 +0000 (0:00:01.205) 0:00:08.836 ***********
2025-06-02 00:53:08.143238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 00:53:08.143264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', '
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 00:53:08.143286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 00:53:08.143298 | orchestrator | 2025-06-02 00:53:08.143309 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-02 00:53:08.143320 | orchestrator | Monday 02 June 2025 00:50:21 +0000 (0:00:02.528) 0:00:11.364 *********** 2025-06-02 00:53:08.143331 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.143342 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.143353 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.143364 | orchestrator | 2025-06-02 00:53:08.143375 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-02 00:53:08.143386 | orchestrator | Monday 02 June 2025 00:50:22 +0000 (0:00:01.009) 0:00:12.373 *********** 2025-06-02 00:53:08.143398 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.143413 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:53:08.143424 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:53:08.143436 | orchestrator | 2025-06-02 00:53:08.143447 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 00:53:08.143457 | orchestrator | Monday 02 June 2025 00:50:25 +0000 (0:00:03.106) 0:00:15.479 *********** 2025-06-02 00:53:08.143468 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:53:08.143489 | orchestrator | 2025-06-02 00:53:08.143501 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-02 00:53:08.143512 | orchestrator | Monday 02 June 2025 00:50:25 +0000 (0:00:00.485) 0:00:15.965 *********** 2025-06-02 00:53:08.143533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:53:08.143546 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.143563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:53:08.143582 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.143602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:53:08.143615 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.143626 | orchestrator | 2025-06-02 00:53:08.143637 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-02 00:53:08.143648 | orchestrator | Monday 02 June 2025 00:50:27 
+0000 (0:00:02.113) 0:00:18.079 *********** 2025-06-02 00:53:08.143660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:53:08.143686 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
00:53:08.143718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:53:08.143741 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.143762 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:53:08.143781 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.143799 | orchestrator | 2025-06-02 00:53:08.143850 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2025-06-02 00:53:08.143880 | orchestrator | Monday 02 June 2025 00:50:30 +0000 (0:00:02.221) 0:00:20.300 *********** 2025-06-02 00:53:08.143919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2025-06-02 00:53:08.143941 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.144007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:53:08.144020 
| orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.144037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 00:53:08.144058 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
00:53:08.144069 | orchestrator | 2025-06-02 00:53:08.144080 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-02 00:53:08.144091 | orchestrator | Monday 02 June 2025 00:50:32 +0000 (0:00:02.099) 0:00:22.400 *********** 2025-06-02 00:53:08.144113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 00:53:08.144132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-06-02 00:53:08.144161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 00:53:08.144175 | orchestrator | 2025-06-02 00:53:08.144186 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2025-06-02 00:53:08.144197 | orchestrator | Monday 02 June 2025 00:50:34 +0000 (0:00:02.707) 0:00:25.107 *********** 2025-06-02 00:53:08.144208 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.144219 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:53:08.144230 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:53:08.144241 | orchestrator | 2025-06-02 00:53:08.144252 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-02 00:53:08.144274 | orchestrator | Monday 02 June 2025 00:50:35 +0000 (0:00:00.963) 0:00:26.071 *********** 2025-06-02 00:53:08.144285 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.144297 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:53:08.144308 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:53:08.144320 | orchestrator | 2025-06-02 00:53:08.144331 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-02 00:53:08.144342 | orchestrator | Monday 02 June 2025 00:50:36 +0000 (0:00:00.314) 0:00:26.385 *********** 2025-06-02 00:53:08.144353 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.144364 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:53:08.144375 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:53:08.144386 | orchestrator | 2025-06-02 00:53:08.144397 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-02 00:53:08.144408 | orchestrator | Monday 02 June 2025 00:50:36 +0000 (0:00:00.300) 0:00:26.685 *********** 2025-06-02 00:53:08.144420 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-02 00:53:08.144432 | orchestrator | ...ignoring 2025-06-02 00:53:08.144448 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-02 00:53:08.144459 | orchestrator | ...ignoring 2025-06-02 00:53:08.144471 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-02 00:53:08.144482 | orchestrator | ...ignoring 2025-06-02 00:53:08.144493 | orchestrator | 2025-06-02 00:53:08.144504 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-02 00:53:08.144515 | orchestrator | Monday 02 June 2025 00:50:47 +0000 (0:00:10.804) 0:00:37.489 *********** 2025-06-02 00:53:08.144526 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.144537 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:53:08.144548 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:53:08.144573 | orchestrator | 2025-06-02 00:53:08.144595 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-02 00:53:08.144607 | orchestrator | Monday 02 June 2025 00:50:47 +0000 (0:00:00.575) 0:00:38.064 *********** 2025-06-02 00:53:08.144618 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.144629 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.144640 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.144651 | orchestrator | 2025-06-02 00:53:08.144662 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-02 00:53:08.144673 | orchestrator | Monday 02 June 2025 00:50:48 +0000 (0:00:00.408) 0:00:38.473 *********** 2025-06-02 00:53:08.144684 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.144695 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.144706 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.144717 | orchestrator | 2025-06-02 00:53:08.144729 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-02 00:53:08.144740 | orchestrator | Monday 02 June 2025 00:50:48 +0000 (0:00:00.396) 0:00:38.869 *********** 2025-06-02 00:53:08.144750 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.144762 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.144773 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.144784 | orchestrator | 2025-06-02 00:53:08.144795 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-02 00:53:08.144840 | orchestrator | Monday 02 June 2025 00:50:49 +0000 (0:00:00.433) 0:00:39.303 *********** 2025-06-02 00:53:08.144853 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.144864 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:53:08.144875 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:53:08.144886 | orchestrator | 2025-06-02 00:53:08.144897 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-02 00:53:08.144916 | orchestrator | Monday 02 June 2025 00:50:49 +0000 (0:00:00.575) 0:00:39.879 *********** 2025-06-02 00:53:08.144927 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.144938 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.144949 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.144960 | orchestrator | 2025-06-02 00:53:08.144971 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 00:53:08.144982 | orchestrator | Monday 02 June 2025 00:50:50 +0000 (0:00:00.434) 0:00:40.313 *********** 2025-06-02 00:53:08.144993 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.145004 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.145015 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-02 00:53:08.145026 | orchestrator | 2025-06-02 
00:53:08.145036 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-02 00:53:08.145047 | orchestrator | Monday 02 June 2025 00:50:50 +0000 (0:00:00.329) 0:00:40.643 *********** 2025-06-02 00:53:08.145058 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.145069 | orchestrator | 2025-06-02 00:53:08.145080 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-02 00:53:08.145091 | orchestrator | Monday 02 June 2025 00:51:00 +0000 (0:00:09.936) 0:00:50.579 *********** 2025-06-02 00:53:08.145101 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.145112 | orchestrator | 2025-06-02 00:53:08.145123 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 00:53:08.145134 | orchestrator | Monday 02 June 2025 00:51:00 +0000 (0:00:00.110) 0:00:50.689 *********** 2025-06-02 00:53:08.145145 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.145156 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.145167 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.145177 | orchestrator | 2025-06-02 00:53:08.145188 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-02 00:53:08.145199 | orchestrator | Monday 02 June 2025 00:51:01 +0000 (0:00:00.978) 0:00:51.668 *********** 2025-06-02 00:53:08.145210 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.145221 | orchestrator | 2025-06-02 00:53:08.145232 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-02 00:53:08.145243 | orchestrator | Monday 02 June 2025 00:51:08 +0000 (0:00:07.438) 0:00:59.106 *********** 2025-06-02 00:53:08.145253 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.145264 | orchestrator | 2025-06-02 00:53:08.145275 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-06-02 00:53:08.145286 | orchestrator | Monday 02 June 2025 00:51:10 +0000 (0:00:01.643) 0:01:00.749 *********** 2025-06-02 00:53:08.145297 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.145308 | orchestrator | 2025-06-02 00:53:08.145319 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-02 00:53:08.145330 | orchestrator | Monday 02 June 2025 00:51:12 +0000 (0:00:02.370) 0:01:03.120 *********** 2025-06-02 00:53:08.145341 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.145352 | orchestrator | 2025-06-02 00:53:08.145363 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-02 00:53:08.145374 | orchestrator | Monday 02 June 2025 00:51:13 +0000 (0:00:00.124) 0:01:03.244 *********** 2025-06-02 00:53:08.145385 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.145395 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.145407 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.145418 | orchestrator | 2025-06-02 00:53:08.145429 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-02 00:53:08.145445 | orchestrator | Monday 02 June 2025 00:51:13 +0000 (0:00:00.463) 0:01:03.708 *********** 2025-06-02 00:53:08.145456 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.145467 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-02 00:53:08.145478 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:53:08.145496 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:53:08.145507 | orchestrator | 2025-06-02 00:53:08.145518 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-02 00:53:08.145529 | orchestrator | skipping: no hosts matched 2025-06-02 00:53:08.145540 | orchestrator | 2025-06-02 00:53:08.145551 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 00:53:08.145561 | orchestrator | 2025-06-02 00:53:08.145572 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 00:53:08.145583 | orchestrator | Monday 02 June 2025 00:51:13 +0000 (0:00:00.301) 0:01:04.009 *********** 2025-06-02 00:53:08.145594 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:53:08.145605 | orchestrator | 2025-06-02 00:53:08.145616 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 00:53:08.145627 | orchestrator | Monday 02 June 2025 00:51:31 +0000 (0:00:18.178) 0:01:22.187 *********** 2025-06-02 00:53:08.145638 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:53:08.145648 | orchestrator | 2025-06-02 00:53:08.145659 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 00:53:08.145670 | orchestrator | Monday 02 June 2025 00:51:52 +0000 (0:00:20.615) 0:01:42.803 *********** 2025-06-02 00:53:08.145681 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:53:08.145692 | orchestrator | 2025-06-02 00:53:08.145703 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 00:53:08.145714 | orchestrator | 2025-06-02 00:53:08.145725 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 00:53:08.145736 | orchestrator | Monday 02 June 2025 00:51:54 +0000 (0:00:02.318) 0:01:45.122 *********** 2025-06-02 00:53:08.145747 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:53:08.145758 | orchestrator | 2025-06-02 00:53:08.145769 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 00:53:08.145787 | orchestrator | Monday 02 June 2025 00:52:18 +0000 (0:00:23.769) 0:02:08.892 *********** 2025-06-02 00:53:08.145799 | 
orchestrator | ok: [testbed-node-2] 2025-06-02 00:53:08.145855 | orchestrator | 2025-06-02 00:53:08.145869 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 00:53:08.145880 | orchestrator | Monday 02 June 2025 00:52:34 +0000 (0:00:15.531) 0:02:24.423 *********** 2025-06-02 00:53:08.145891 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:53:08.145902 | orchestrator | 2025-06-02 00:53:08.145912 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-02 00:53:08.145923 | orchestrator | 2025-06-02 00:53:08.145934 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 00:53:08.145945 | orchestrator | Monday 02 June 2025 00:52:37 +0000 (0:00:02.881) 0:02:27.305 *********** 2025-06-02 00:53:08.145956 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.145967 | orchestrator | 2025-06-02 00:53:08.145978 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 00:53:08.145988 | orchestrator | Monday 02 June 2025 00:52:47 +0000 (0:00:10.134) 0:02:37.439 *********** 2025-06-02 00:53:08.145999 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.146010 | orchestrator | 2025-06-02 00:53:08.146072 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 00:53:08.146084 | orchestrator | Monday 02 June 2025 00:52:51 +0000 (0:00:04.507) 0:02:41.947 *********** 2025-06-02 00:53:08.146095 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.146106 | orchestrator | 2025-06-02 00:53:08.146117 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-02 00:53:08.146128 | orchestrator | 2025-06-02 00:53:08.146139 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-02 00:53:08.146150 | orchestrator | 
Monday 02 June 2025 00:52:53 +0000 (0:00:02.208) 0:02:44.156 *********** 2025-06-02 00:53:08.146160 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:53:08.146171 | orchestrator | 2025-06-02 00:53:08.146183 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-02 00:53:08.146201 | orchestrator | Monday 02 June 2025 00:52:54 +0000 (0:00:00.519) 0:02:44.675 *********** 2025-06-02 00:53:08.146212 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.146223 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.146234 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.146245 | orchestrator | 2025-06-02 00:53:08.146255 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-02 00:53:08.146266 | orchestrator | Monday 02 June 2025 00:52:56 +0000 (0:00:02.291) 0:02:46.967 *********** 2025-06-02 00:53:08.146277 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.146288 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.146299 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.146309 | orchestrator | 2025-06-02 00:53:08.146321 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-02 00:53:08.146331 | orchestrator | Monday 02 June 2025 00:52:58 +0000 (0:00:01.995) 0:02:48.963 *********** 2025-06-02 00:53:08.146342 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.146353 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.146364 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.146375 | orchestrator | 2025-06-02 00:53:08.146385 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-02 00:53:08.146394 | orchestrator | Monday 02 June 2025 00:53:00 +0000 (0:00:01.958) 0:02:50.921 *********** 2025-06-02 00:53:08.146404 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.146414 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.146423 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:53:08.146433 | orchestrator | 2025-06-02 00:53:08.146443 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-02 00:53:08.146452 | orchestrator | Monday 02 June 2025 00:53:02 +0000 (0:00:01.907) 0:02:52.828 *********** 2025-06-02 00:53:08.146462 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:53:08.146472 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:53:08.146482 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:53:08.146492 | orchestrator | 2025-06-02 00:53:08.146506 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-02 00:53:08.146516 | orchestrator | Monday 02 June 2025 00:53:05 +0000 (0:00:02.865) 0:02:55.694 *********** 2025-06-02 00:53:08.146526 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:53:08.146535 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:53:08.146545 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:53:08.146555 | orchestrator | 2025-06-02 00:53:08.146565 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:53:08.146574 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-02 00:53:08.146585 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-02 00:53:08.146596 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 00:53:08.146606 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 00:53:08.146616 | orchestrator | 2025-06-02 00:53:08.146626 | orchestrator | 2025-06-02 00:53:08.146635 | orchestrator | 
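The earlier "Check MariaDB service port liveness" failures ("Timeout when waiting for search string MariaDB in 192.168.16.10:3306") come from a wait_for-style probe: connect to port 3306 and look for the string "MariaDB" in the server greeting. Before the cluster is bootstrapped nothing answers, so the timeout is expected and the task is ignored. A minimal sketch of such a probe (a hypothetical helper, not the actual Ansible `wait_for` implementation):

```python
import socket
import time

def wait_for_banner(host, port, search=b"MariaDB", timeout=10.0):
    """Poll host:port until the TCP greeting contains `search`.

    Returns True on a match, False once `timeout` seconds elapse --
    the log's "Timeout when waiting for search string MariaDB" case.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2) as conn:
                conn.settimeout(2)
                # MySQL/MariaDB servers send their handshake packet first,
                # which embeds the server version string.
                greeting = conn.recv(1024)
                if search in greeting:
                    return True
        except OSError:
            pass  # nothing listening yet; retry until the deadline
        time.sleep(0.5)
    return False
```

On a healthy node the handshake contains a version such as `10.11.13-MariaDB`, so the probe matches on the first connection.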
TASKS RECAP ******************************************************************** 2025-06-02 00:53:08.146645 | orchestrator | Monday 02 June 2025 00:53:05 +0000 (0:00:00.210) 0:02:55.904 *********** 2025-06-02 00:53:08.146655 | orchestrator | =============================================================================== 2025-06-02 00:53:08.146664 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.95s 2025-06-02 00:53:08.146674 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.15s 2025-06-02 00:53:08.146701 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.80s 2025-06-02 00:53:08.146712 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.13s 2025-06-02 00:53:08.146721 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.94s 2025-06-02 00:53:08.146731 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.44s 2025-06-02 00:53:08.146741 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.20s 2025-06-02 00:53:08.146751 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.51s 2025-06-02 00:53:08.146760 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.11s 2025-06-02 00:53:08.146770 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.87s 2025-06-02 00:53:08.146779 | orchestrator | Check MariaDB service --------------------------------------------------- 2.79s 2025-06-02 00:53:08.146789 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.71s 2025-06-02 00:53:08.146799 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.53s 2025-06-02 00:53:08.146808 | orchestrator | mariadb : Ensuring 
config directories exist ----------------------------- 2.51s 2025-06-02 00:53:08.146833 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.37s 2025-06-02 00:53:08.146842 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.29s 2025-06-02 00:53:08.146852 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.22s 2025-06-02 00:53:08.146862 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.21s 2025-06-02 00:53:08.146871 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.11s 2025-06-02 00:53:08.146881 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.10s 2025-06-02 00:53:08.146891 | orchestrator | 2025-06-02 00:53:08 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:08.146901 | orchestrator | 2025-06-02 00:53:08 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:08.146911 | orchestrator | 2025-06-02 00:53:08 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:08.146921 | orchestrator | 2025-06-02 00:53:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:11.202628 | orchestrator | 2025-06-02 00:53:11 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:11.203233 | orchestrator | 2025-06-02 00:53:11 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:11.206989 | orchestrator | 2025-06-02 00:53:11 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:11.207065 | orchestrator | 2025-06-02 00:53:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:14.249477 | orchestrator | 2025-06-02 00:53:14 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 
00:53:14.250574 | orchestrator | 2025-06-02 00:53:14 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:14.251602 | orchestrator | 2025-06-02 00:53:14 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:14.251892 | orchestrator | 2025-06-02 00:53:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:17.292228 | orchestrator | 2025-06-02 00:53:17 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:17.293494 | orchestrator | 2025-06-02 00:53:17 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:17.296056 | orchestrator | 2025-06-02 00:53:17 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:17.298158 | orchestrator | 2025-06-02 00:53:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:20.337035 | orchestrator | 2025-06-02 00:53:20 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:20.376490 | orchestrator | 2025-06-02 00:53:20 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:20.376592 | orchestrator | 2025-06-02 00:53:20 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:20.376609 | orchestrator | 2025-06-02 00:53:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:23.385369 | orchestrator | 2025-06-02 00:53:23 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:23.385477 | orchestrator | 2025-06-02 00:53:23 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:23.386310 | orchestrator | 2025-06-02 00:53:23 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:23.386337 | orchestrator | 2025-06-02 00:53:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:26.425334 | orchestrator | 2025-06-02 00:53:26 | 
INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:26.425848 | orchestrator | 2025-06-02 00:53:26 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:26.426897 | orchestrator | 2025-06-02 00:53:26 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:26.426933 | orchestrator | 2025-06-02 00:53:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:29.470216 | orchestrator | 2025-06-02 00:53:29 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:29.470320 | orchestrator | 2025-06-02 00:53:29 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:29.474447 | orchestrator | 2025-06-02 00:53:29 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:29.475664 | orchestrator | 2025-06-02 00:53:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:32.523369 | orchestrator | 2025-06-02 00:53:32 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:32.523481 | orchestrator | 2025-06-02 00:53:32 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:32.523504 | orchestrator | 2025-06-02 00:53:32 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:32.523536 | orchestrator | 2025-06-02 00:53:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:35.570673 | orchestrator | 2025-06-02 00:53:35 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:35.571298 | orchestrator | 2025-06-02 00:53:35 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:35.572121 | orchestrator | 2025-06-02 00:53:35 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:35.572148 | orchestrator | 2025-06-02 00:53:35 | INFO  | Wait 1 second(s) until 
the next check 2025-06-02 00:53:38.613439 | orchestrator | 2025-06-02 00:53:38 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:38.613569 | orchestrator | 2025-06-02 00:53:38 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:38.614899 | orchestrator | 2025-06-02 00:53:38 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:38.614973 | orchestrator | 2025-06-02 00:53:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:41.659214 | orchestrator | 2025-06-02 00:53:41 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:41.662387 | orchestrator | 2025-06-02 00:53:41 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:41.664987 | orchestrator | 2025-06-02 00:53:41 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:41.665017 | orchestrator | 2025-06-02 00:53:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:44.696298 | orchestrator | 2025-06-02 00:53:44 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:44.697991 | orchestrator | 2025-06-02 00:53:44 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:44.699601 | orchestrator | 2025-06-02 00:53:44 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:44.699640 | orchestrator | 2025-06-02 00:53:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:47.744318 | orchestrator | 2025-06-02 00:53:47 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:47.747868 | orchestrator | 2025-06-02 00:53:47 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:47.748938 | orchestrator | 2025-06-02 00:53:47 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 
00:53:47.749044 | orchestrator | 2025-06-02 00:53:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:50.785203 | orchestrator | 2025-06-02 00:53:50 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:50.786209 | orchestrator | 2025-06-02 00:53:50 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:50.788906 | orchestrator | 2025-06-02 00:53:50 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:50.789019 | orchestrator | 2025-06-02 00:53:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:53.832561 | orchestrator | 2025-06-02 00:53:53 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:53.833679 | orchestrator | 2025-06-02 00:53:53 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:53.834996 | orchestrator | 2025-06-02 00:53:53 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:53.835024 | orchestrator | 2025-06-02 00:53:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:56.876264 | orchestrator | 2025-06-02 00:53:56 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:56.877856 | orchestrator | 2025-06-02 00:53:56 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:56.881664 | orchestrator | 2025-06-02 00:53:56 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:56.881702 | orchestrator | 2025-06-02 00:53:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:53:59.926312 | orchestrator | 2025-06-02 00:53:59 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:53:59.926503 | orchestrator | 2025-06-02 00:53:59 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:53:59.928089 | orchestrator | 2025-06-02 00:53:59 | 
INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:53:59.928133 | orchestrator | 2025-06-02 00:53:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:02.978347 | orchestrator | 2025-06-02 00:54:02 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:54:02.979348 | orchestrator | 2025-06-02 00:54:02 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:02.980994 | orchestrator | 2025-06-02 00:54:02 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:02.981020 | orchestrator | 2025-06-02 00:54:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:06.016473 | orchestrator | 2025-06-02 00:54:06 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state STARTED 2025-06-02 00:54:06.017717 | orchestrator | 2025-06-02 00:54:06 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:06.021438 | orchestrator | 2025-06-02 00:54:06 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:06.021469 | orchestrator | 2025-06-02 00:54:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:09.072272 | orchestrator | 2025-06-02 00:54:09 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state STARTED 2025-06-02 00:54:09.076090 | orchestrator | 2025-06-02 00:54:09 | INFO  | Task b44a4afd-3070-4169-9880-52f2e3a186d2 is in state SUCCESS 2025-06-02 00:54:09.078115 | orchestrator | 2025-06-02 00:54:09.078160 | orchestrator | 2025-06-02 00:54:09.078201 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-02 00:54:09.078215 | orchestrator | 2025-06-02 00:54:09.078227 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-02 00:54:09.078238 | orchestrator | Monday 02 June 2025 00:52:03 +0000 (0:00:00.513) 0:00:00.513 *********** 
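The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above are a plain poll-until-done loop over task IDs. A sketch of that pattern, with `get_state` as a stand-in for whatever status lookup the orchestrator actually performs (the real client and its API are not shown in this log):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, max_checks=1000):
    """Poll `get_state(task_id)` until every task leaves STARTED.

    `get_state` is a placeholder for the real status lookup and should
    return strings such as 'STARTED' or 'SUCCESS'. Returns each task's
    final state, keyed by ID.
    """
    pending = set(task_ids)
    final = {}
    for _ in range(max_checks):
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                final[task_id] = state
        pending -= set(final)
        if not pending:
            return final
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"tasks still pending: {sorted(pending)}")
```

In the log, task b44a4afd leaves STARTED for SUCCESS at 00:54:09, at which point its buffered play output is flushed, while the remaining tasks keep being polled.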
2025-06-02 00:54:09.078250 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:54:09.078270 | orchestrator |
2025-06-02 00:54:09.078288 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-02 00:54:09.078300 | orchestrator | Monday 02 June 2025  00:52:04 +0000 (0:00:00.498)       0:00:01.012 ***********
2025-06-02 00:54:09.078311 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.078325 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.078337 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.078356 | orchestrator |
2025-06-02 00:54:09.078376 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-02 00:54:09.078533 | orchestrator | Monday 02 June 2025  00:52:04 +0000 (0:00:00.583)       0:00:01.595 ***********
2025-06-02 00:54:09.078547 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.078558 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.078569 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.078580 | orchestrator |
2025-06-02 00:54:09.078592 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-02 00:54:09.078603 | orchestrator | Monday 02 June 2025  00:52:04 +0000 (0:00:00.231)       0:00:01.827 ***********
2025-06-02 00:54:09.078614 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.078624 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.078636 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.078646 | orchestrator |
2025-06-02 00:54:09.078658 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-02 00:54:09.078669 | orchestrator | Monday 02 June 2025  00:52:05 +0000 (0:00:00.678)       0:00:02.506 ***********
2025-06-02 00:54:09.078679 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.078690 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.078701 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.078712 | orchestrator |
2025-06-02 00:54:09.078749 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-02 00:54:09.078770 | orchestrator | Monday 02 June 2025  00:52:05 +0000 (0:00:00.266)       0:00:02.772 ***********
2025-06-02 00:54:09.079631 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.079671 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.079710 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.079759 | orchestrator |
2025-06-02 00:54:09.079808 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-02 00:54:09.079828 | orchestrator | Monday 02 June 2025  00:52:06 +0000 (0:00:00.234)       0:00:03.007 ***********
2025-06-02 00:54:09.079844 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.079878 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.079899 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.079918 | orchestrator |
2025-06-02 00:54:09.079934 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-02 00:54:09.079946 | orchestrator | Monday 02 June 2025  00:52:06 +0000 (0:00:00.216)       0:00:03.224 ***********
2025-06-02 00:54:09.079961 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.079982 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.080001 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.080017 | orchestrator |
2025-06-02 00:54:09.080029 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-02 00:54:09.080041 | orchestrator | Monday 02 June 2025  00:52:06 +0000 (0:00:00.296)       0:00:03.521 ***********
2025-06-02 00:54:09.080061 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.080081 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.080100 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.080112 | orchestrator |
2025-06-02 00:54:09.080123 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-02 00:54:09.080134 | orchestrator | Monday 02 June 2025  00:52:06 +0000 (0:00:00.233)       0:00:03.754 ***********
2025-06-02 00:54:09.080146 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 00:54:09.080157 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 00:54:09.080168 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 00:54:09.080179 | orchestrator |
2025-06-02 00:54:09.080190 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-02 00:54:09.080201 | orchestrator | Monday 02 June 2025  00:52:07 +0000 (0:00:00.624)       0:00:04.379 ***********
2025-06-02 00:54:09.080212 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.080223 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.080235 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.080246 | orchestrator |
2025-06-02 00:54:09.080258 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-02 00:54:09.080269 | orchestrator | Monday 02 June 2025  00:52:07 +0000 (0:00:00.356)       0:00:04.736 ***********
2025-06-02 00:54:09.080280 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 00:54:09.080290 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 00:54:09.080301 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 00:54:09.080312 | orchestrator |
2025-06-02 00:54:09.080323 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-02 00:54:09.080334 | orchestrator | Monday 02 June 2025  00:52:09 +0000 (0:00:01.963)       0:00:06.700 ***********
2025-06-02 00:54:09.080345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 00:54:09.080356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 00:54:09.080367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 00:54:09.080379 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.080390 | orchestrator |
2025-06-02 00:54:09.080401 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-02 00:54:09.080425 | orchestrator | Monday 02 June 2025  00:52:10 +0000 (0:00:00.343)       0:00:07.043 ***********
2025-06-02 00:54:09.080449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.080474 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.080486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.080497 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.080508 | orchestrator |
2025-06-02 00:54:09.080519 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-02 00:54:09.080530 | orchestrator | Monday 02 June 2025  00:52:10 +0000 (0:00:00.735)       0:00:07.779 ***********
2025-06-02 00:54:09.080543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.080557 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.080569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.080580 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.080591 | orchestrator |
2025-06-02 00:54:09.080603 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-02 00:54:09.080614 | orchestrator | Monday 02 June 2025  00:52:10 +0000 (0:00:00.145)       0:00:07.924 ***********
2025-06-02 00:54:09.080627 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b99ecfd2fdb7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 00:52:08.396058', 'end': '2025-06-02 00:52:08.444719', 'delta': '0:00:00.048661', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b99ecfd2fdb7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.080642 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '68b4fe0b4c07', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 00:52:09.049424', 'end': '2025-06-02 00:52:09.095992', 'delta': '0:00:00.046568', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['68b4fe0b4c07'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.080676 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '325cc7d2ea5e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 00:52:09.561255', 'end': '2025-06-02 00:52:09.611964', 'delta': '0:00:00.050709', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['325cc7d2ea5e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.080689 | orchestrator |
2025-06-02 00:54:09.080700 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-02 00:54:09.080712 | orchestrator | Monday 02 June 2025  00:52:11 +0000 (0:00:00.343)       0:00:08.267 ***********
2025-06-02 00:54:09.080785 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.080807 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.080819 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.080830 | orchestrator |
2025-06-02 00:54:09.080841 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-02 00:54:09.080852 | orchestrator | Monday 02 June 2025  00:52:11 +0000 (0:00:00.404)       0:00:08.671 ***********
2025-06-02 00:54:09.080863 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-06-02 00:54:09.080874 | orchestrator |
2025-06-02 00:54:09.080885 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-02 00:54:09.080896 | orchestrator | Monday 02 June 2025  00:52:13 +0000 (0:00:01.615)       0:00:10.287 ***********
2025-06-02 00:54:09.080907 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.080918 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.080929 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.080940 | orchestrator |
2025-06-02 00:54:09.080951 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-02 00:54:09.080976 | orchestrator | Monday 02 June 2025  00:52:13 +0000 (0:00:00.258)       0:00:10.546 ***********
2025-06-02 00:54:09.080988 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.080999 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.081022 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.081033 | orchestrator |
2025-06-02 00:54:09.081044 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 00:54:09.081055 | orchestrator | Monday 02 June 2025  00:52:13 +0000 (0:00:00.372)       0:00:10.918 ***********
2025-06-02 00:54:09.081066 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.081077 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.081088 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.081099 | orchestrator |
2025-06-02 00:54:09.081110 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-02 00:54:09.081121 | orchestrator | Monday 02 June 2025  00:52:14 +0000 (0:00:00.424)       0:00:11.342 ***********
2025-06-02 00:54:09.081132 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.081143 | orchestrator |
2025-06-02 00:54:09.081154 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-02 00:54:09.081165 | orchestrator | Monday 02 June 2025  00:52:14 +0000 (0:00:00.121)       0:00:11.464 ***********
2025-06-02 00:54:09.081175 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.081186 | orchestrator |
2025-06-02 00:54:09.081197 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 00:54:09.081208 | orchestrator | Monday 02 June 2025  00:52:14 +0000 (0:00:00.225)       0:00:11.690 ***********
2025-06-02 00:54:09.081219 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.081230 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.081241 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.081260 | orchestrator |
2025-06-02 00:54:09.081271 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-02 00:54:09.081282 | orchestrator | Monday 02 June 2025  00:52:14 +0000 (0:00:00.273)       0:00:11.963 ***********
2025-06-02 00:54:09.081293 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.081304 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.081315 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.081326 | orchestrator |
2025-06-02 00:54:09.081337 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-02 00:54:09.081348 | orchestrator | Monday 02 June 2025  00:52:15 +0000 (0:00:00.331)       0:00:12.295 ***********
2025-06-02 00:54:09.081359 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.081370 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.081381 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.081392 | orchestrator |
2025-06-02 00:54:09.081403 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-02 00:54:09.081414 | orchestrator | Monday 02 June 2025  00:52:15 +0000 (0:00:00.455)       0:00:12.750 ***********
2025-06-02 00:54:09.081425 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.081436 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.081447 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.081458 | orchestrator |
2025-06-02 00:54:09.081469 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-02 00:54:09.081480 | orchestrator | Monday 02 June 2025  00:52:16 +0000 (0:00:00.351)       0:00:13.102 ***********
2025-06-02 00:54:09.081491 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.081501 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.081512 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.081523 | orchestrator |
2025-06-02 00:54:09.081534 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-02 00:54:09.081545 | orchestrator | Monday 02 June 2025  00:52:16 +0000 (0:00:00.297)       0:00:13.399 ***********
2025-06-02 00:54:09.081556 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.081567 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.081578 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.081588 | orchestrator |
2025-06-02 00:54:09.081600 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-02 00:54:09.081624 | orchestrator | Monday 02 June 2025  00:52:16 +0000 (0:00:00.307)       0:00:13.707 ***********
2025-06-02 00:54:09.081636 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.081647 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.081658 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.081669 | orchestrator |
2025-06-02 00:54:09.081680 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-02 00:54:09.081691 | orchestrator | Monday 02 June 2025  00:52:17 +0000 (0:00:00.444)       0:00:14.151 ***********
2025-06-02 00:54:09.081703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3a2aacf8--31c8--546a--a559--f7f9618b27d4-osd--block--3a2aacf8--31c8--546a--a559--f7f9618b27d4', 'dm-uuid-LVM-NNCStWpcr9tenQmNxri7LASeTRMcEv6AoOnwkS7N482btF35qnYY416n1aLIbtP8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1905453d--e612--5c47--8424--6bc4888ba216-osd--block--1905453d--e612--5c47--8424--6bc4888ba216', 'dm-uuid-LVM-bmI6VfEwWdXz2xP9C2LcPSFfTFAwKWN8tG6LgHTqOcSBqmax2tJJF1Q3vaj0PA1J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--89fe9f69--ec16--58f3--8212--bc080cf4c28c-osd--block--89fe9f69--ec16--58f3--8212--bc080cf4c28c', 'dm-uuid-LVM-ognnUHwkOr4oV4bQOavTnlv8gd9RlZxuN1XyIq7r9rNVLx10b02DvAy6y4irDY5P'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3a308c11--b64c--503e--b49b--4b3a12050ecf-osd--block--3a308c11--b64c--503e--b49b--4b3a12050ecf', 'dm-uuid-LVM-lh09BsjJdtc94H2oQxQdRnzKfwmOYgWsiBp0OoPA26YaAK1R1G3gj2Iu4RnsSePy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:54:09.081932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3a2aacf8--31c8--546a--a559--f7f9618b27d4-osd--block--3a2aacf8--31c8--546a--a559--f7f9618b27d4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z2iHD6-ULgy-BkLr-xDEJ-3xhd-8Hdb-xZrL0x', 'scsi-0QEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931', 'scsi-SQEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:54:09.081963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1905453d--e612--5c47--8424--6bc4888ba216-osd--block--1905453d--e612--5c47--8424--6bc4888ba216'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-t7nJSk-psuA-nDo5-CXH5-9b9Q-apxe-e719j6', 'scsi-0QEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1', 'scsi-SQEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:54:09.081986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.081998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2', 'scsi-SQEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:54:09.082009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.082100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:54:09.082123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.082164 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.082185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.082202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.082214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.082333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part1', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part14', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part15', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part16', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:54:09.082351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--89fe9f69--ec16--58f3--8212--bc080cf4c28c-osd--block--89fe9f69--ec16--58f3--8212--bc080cf4c28c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Pd0Enh-ZyGr-1RZz-WfSA-5okV-5eB7-miMFkN', 'scsi-0QEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d', 'scsi-SQEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:54:09.082371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3a308c11--b64c--503e--b49b--4b3a12050ecf-osd--block--3a308c11--b64c--503e--b49b--4b3a12050ecf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q2xvrl-5jo8-WvCY-nGjg-aqiH-iAhZ-K3eckw', 'scsi-0QEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd', 'scsi-SQEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 00:54:09.082383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644-osd--block--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644', 'dm-uuid-LVM-avSIsvovV4pOZUqGYx7LvX2X2ezUL6JLR2N4CiJgOJgCbEu5wAT023vZOdeKr6HB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 00:54:09.082395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5', 'scsi-SQEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:54:09.082407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17a6e190--aa70--5b53--9f6a--9d016360bd22-osd--block--17a6e190--aa70--5b53--9f6a--9d016360bd22', 'dm-uuid-LVM-fS3KfuMJUAk8TssYvM3o8inwlApLtYRI1qvo6Tzwi5hYLdJhvnuVrwB79sNe3JWX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 00:54:09.082431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:54:09.082443 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:54:09.082455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:54:09.082473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:54:09.082484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:54:09.082495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:54:09.082507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:54:09.082518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:54:09.082529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:54:09.082540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 00:54:09.082565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:54:09.082586 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644-osd--block--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GF3yfX-GBy3-gDda-qdDG-hLeU-qQZm-CrHybA', 'scsi-0QEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e', 'scsi-SQEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:54:09.082598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--17a6e190--aa70--5b53--9f6a--9d016360bd22-osd--block--17a6e190--aa70--5b53--9f6a--9d016360bd22'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ku3b3c-npBj-z1Yj-LXYu-ex50-hTYq-uEKlYj', 'scsi-0QEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172', 'scsi-SQEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:54:09.082610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb', 'scsi-SQEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:54:09.082632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 00:54:09.082650 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:54:09.082662 | orchestrator | 2025-06-02 00:54:09.082673 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-02 00:54:09.082685 | orchestrator | Monday 02 June 2025 00:52:17 +0000 (0:00:00.541) 0:00:14.693 *********** 2025-06-02 00:54:09.082697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3a2aacf8--31c8--546a--a559--f7f9618b27d4-osd--block--3a2aacf8--31c8--546a--a559--f7f9618b27d4', 'dm-uuid-LVM-NNCStWpcr9tenQmNxri7LASeTRMcEv6AoOnwkS7N482btF35qnYY416n1aLIbtP8'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1905453d--e612--5c47--8424--6bc4888ba216-osd--block--1905453d--e612--5c47--8424--6bc4888ba216', 'dm-uuid-LVM-bmI6VfEwWdXz2xP9C2LcPSFfTFAwKWN8tG6LgHTqOcSBqmax2tJJF1Q3vaj0PA1J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082746 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082772 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082796 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082839 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--89fe9f69--ec16--58f3--8212--bc080cf4c28c-osd--block--89fe9f69--ec16--58f3--8212--bc080cf4c28c', 'dm-uuid-LVM-ognnUHwkOr4oV4bQOavTnlv8gd9RlZxuN1XyIq7r9rNVLx10b02DvAy6y4irDY5P'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082851 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082862 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3a308c11--b64c--503e--b49b--4b3a12050ecf-osd--block--3a308c11--b64c--503e--b49b--4b3a12050ecf', 'dm-uuid-LVM-lh09BsjJdtc94H2oQxQdRnzKfwmOYgWsiBp0OoPA26YaAK1R1G3gj2Iu4RnsSePy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082886 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082904 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082917 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5e0c2db-a039-40a9-94ad-8a36749fe93f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082930 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3a2aacf8--31c8--546a--a559--f7f9618b27d4-osd--block--3a2aacf8--31c8--546a--a559--f7f9618b27d4'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z2iHD6-ULgy-BkLr-xDEJ-3xhd-8Hdb-xZrL0x', 'scsi-0QEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931', 'scsi-SQEMU_QEMU_HARDDISK_e440bdc3-1867-4817-abfb-a8a36f681931'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082972 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082984 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1905453d--e612--5c47--8424--6bc4888ba216-osd--block--1905453d--e612--5c47--8424--6bc4888ba216'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-t7nJSk-psuA-nDo5-CXH5-9b9Q-apxe-e719j6', 'scsi-0QEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1', 'scsi-SQEMU_QEMU_HARDDISK_ee21d93f-61cf-428c-a6c4-8efe670724e1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.082996 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083007 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2', 'scsi-SQEMU_QEMU_HARDDISK_fa9fa188-919c-438b-be4f-34a22a00bea2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083038 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083052 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083063 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 00:54:09.083075 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083087 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083119 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part1', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part14', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part15', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part16', 'scsi-SQEMU_QEMU_HARDDISK_24a30ba4-1f7c-48df-a98c-7d1e4021ab04-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644-osd--block--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644', 'dm-uuid-LVM-avSIsvovV4pOZUqGYx7LvX2X2ezUL6JLR2N4CiJgOJgCbEu5wAT023vZOdeKr6HB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083152 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--89fe9f69--ec16--58f3--8212--bc080cf4c28c-osd--block--89fe9f69--ec16--58f3--8212--bc080cf4c28c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Pd0Enh-ZyGr-1RZz-WfSA-5okV-5eB7-miMFkN', 'scsi-0QEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d', 'scsi-SQEMU_QEMU_HARDDISK_ba4d1aaf-78c8-4549-a686-67bb8e50d69d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083164 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17a6e190--aa70--5b53--9f6a--9d016360bd22-osd--block--17a6e190--aa70--5b53--9f6a--9d016360bd22', 'dm-uuid-LVM-fS3KfuMJUAk8TssYvM3o8inwlApLtYRI1qvo6Tzwi5hYLdJhvnuVrwB79sNe3JWX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3a308c11--b64c--503e--b49b--4b3a12050ecf-osd--block--3a308c11--b64c--503e--b49b--4b3a12050ecf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q2xvrl-5jo8-WvCY-nGjg-aqiH-iAhZ-K3eckw', 'scsi-0QEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd', 'scsi-SQEMU_QEMU_HARDDISK_4ace89ec-5901-4181-9aea-4e5d559a0cfd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083299 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5', 'scsi-SQEMU_QEMU_HARDDISK_bbd34322-9953-4267-815d-84376d8605a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083334 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083352 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:54:09.083363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083388 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083400 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083412 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083424 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083436 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083460 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e8c5ff5-c57d-4c08-96dd-e9836efdc119-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
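Every `skipping:` item above carries the same `false_condition: 'osd_auto_discovery | default(False) | bool'` — the play iterates over every entry in `ansible_facts['devices']` (loop devices, the root disk, the config-drive CD-ROM, the ceph-claimed LVs) and skips them all because auto-discovery is disabled in this testbed. As a rough illustration of what such a device filter selects when it *is* enabled, here is a minimal sketch; the function name and exact criteria are illustrative assumptions, not ceph-ansible's actual implementation.

```python
# Hedged sketch: approximate the device selection an osd_auto_discovery-style
# loop performs over ansible_facts['devices']. Field names mirror the fact
# structure visible in the log items above; the selection rules are assumed.

def eligible_osd_devices(devices):
    """Pick disks that look safe to use as OSDs: non-removable block devices
    with no partitions, no holders (e.g. existing ceph LVM volumes), and a
    non-zero size (which excludes the empty loop0..loop7 devices)."""
    selected = []
    for name, info in devices.items():
        if info.get("removable") == "1":
            continue  # CD-ROMs (sr0) and USB sticks
        if info.get("partitions"):
            continue  # already partitioned, e.g. the sda root disk
        if info.get("holders"):
            continue  # already claimed, e.g. sdb/sdc backing ceph OSD LVs
        if not info.get("sectors"):
            continue  # zero-size devices such as unused loop devices
        selected.append(name)
    return selected
```

Against the facts shown for testbed-node-4/5 above, only the unclaimed spare disk (`sdd`) would pass such a filter.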
2025-06-02 00:54:09.083480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644-osd--block--93d4fc0b--cb5c--5d00--94e8--8a1d2b9f8644'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GF3yfX-GBy3-gDda-qdDG-hLeU-qQZm-CrHybA', 'scsi-0QEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e', 'scsi-SQEMU_QEMU_HARDDISK_e8887d11-63ae-4566-a11b-b67b45b1443e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083493 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--17a6e190--aa70--5b53--9f6a--9d016360bd22-osd--block--17a6e190--aa70--5b53--9f6a--9d016360bd22'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ku3b3c-npBj-z1Yj-LXYu-ex50-hTYq-uEKlYj', 'scsi-0QEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172', 'scsi-SQEMU_QEMU_HARDDISK_9dfee06d-65ab-44da-8413-6b371a116172'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083511 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb', 'scsi-SQEMU_QEMU_HARDDISK_0e09092a-0107-49ca-ae5a-eacfcf6197eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 00:54:09.083535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 00:54:09.083547 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.083558 | orchestrator |
2025-06-02 00:54:09.083569 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-02 00:54:09.083581 | orchestrator | Monday 02 June 2025 00:52:18 +0000 (0:00:00.523) 0:00:15.216 ***********
2025-06-02 00:54:09.083592 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.083603 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.083615 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.083626 | orchestrator |
2025-06-02 00:54:09.083638 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-02 00:54:09.083649 | orchestrator | Monday 02 June 2025 00:52:18 +0000 (0:00:00.641) 0:00:15.857 ***********
2025-06-02 00:54:09.083660 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.083671 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.083682 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.083693 | orchestrator |
2025-06-02 00:54:09.083704 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 00:54:09.083715 | orchestrator | Monday 02 June 2025 00:52:19 +0000 (0:00:00.416) 0:00:16.274 ***********
2025-06-02 00:54:09.083795 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.083809 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.083820 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.083831 | orchestrator |
2025-06-02 00:54:09.083842 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 00:54:09.083853 | orchestrator | Monday 02 June 2025 00:52:19 +0000 (0:00:00.597) 0:00:16.871 ***********
2025-06-02 00:54:09.083864 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.083875 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.083886 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.083897 | orchestrator |
2025-06-02 00:54:09.083908 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 00:54:09.083919 | orchestrator | Monday 02 June 2025 00:52:20 +0000 (0:00:00.251) 0:00:17.122 ***********
2025-06-02 00:54:09.083930 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.083941 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.083952 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.083963 | orchestrator |
2025-06-02 00:54:09.083974 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 00:54:09.083984 | orchestrator | Monday 02 June 2025 00:52:20 +0000 (0:00:00.377) 0:00:17.500 ***********
2025-06-02 00:54:09.084003 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.084014 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.084025 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.084036 | orchestrator |
2025-06-02 00:54:09.084047 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-02 00:54:09.084058 | orchestrator | Monday 02 June 2025 00:52:20 +0000 (0:00:00.435) 0:00:17.936 ***********
2025-06-02 00:54:09.084069 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 00:54:09.084080 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 00:54:09.084092 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 00:54:09.084103 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 00:54:09.084114 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 00:54:09.084125 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 00:54:09.084135 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 00:54:09.084146 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 00:54:09.084157 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 00:54:09.084168 | orchestrator |
2025-06-02 00:54:09.084179 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-02 00:54:09.084190 | orchestrator | Monday 02 June 2025 00:52:21 +0000 (0:00:00.799) 0:00:18.735 ***********
2025-06-02 00:54:09.084201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 00:54:09.084212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 00:54:09.084223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 00:54:09.084234 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.084245 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 00:54:09.084256 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 00:54:09.084267 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 00:54:09.084278 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.084288 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 00:54:09.084297 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 00:54:09.084307 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 00:54:09.084317 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.084327 | orchestrator |
2025-06-02 00:54:09.084336 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-02 00:54:09.084346 | orchestrator | Monday 02 June 2025 00:52:22 +0000 (0:00:00.294) 0:00:19.030 ***********
2025-06-02 00:54:09.084356 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:54:09.084366 | orchestrator |
2025-06-02 00:54:09.084376 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 00:54:09.084386 | orchestrator | Monday 02 June 2025 00:52:22 +0000 (0:00:00.625) 0:00:19.656 ***********
2025-06-02 00:54:09.084396 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.084406 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.084416 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.084426 | orchestrator |
2025-06-02 00:54:09.084448 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 00:54:09.084459 | orchestrator | Monday 02 June 2025 00:52:23 +0000 (0:00:00.309) 0:00:19.965 ***********
2025-06-02 00:54:09.084468 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.084478 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.084488 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.084498 | orchestrator |
2025-06-02 00:54:09.084508 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 00:54:09.084524 | orchestrator | Monday 02 June 2025 00:52:23 +0000 (0:00:00.298) 0:00:20.263 ***********
2025-06-02 00:54:09.084534 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.084544 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.084554 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:54:09.084563 | orchestrator |
2025-06-02 00:54:09.084573 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 00:54:09.084583 | orchestrator | Monday 02 June 2025 00:52:23 +0000 (0:00:00.293) 0:00:20.557 ***********
2025-06-02 00:54:09.084592 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.084602 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.084612 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.084622 | orchestrator |
2025-06-02 00:54:09.084632 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 00:54:09.084642 | orchestrator | Monday 02 June 2025 00:52:24 +0000 (0:00:00.552) 0:00:21.109 ***********
2025-06-02 00:54:09.084652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:54:09.084661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:54:09.084671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:54:09.084681 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.084690 | orchestrator |
2025-06-02 00:54:09.084700 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 00:54:09.084710 | orchestrator | Monday 02 June 2025 00:52:24 +0000 (0:00:00.361) 0:00:21.471 ***********
2025-06-02 00:54:09.084737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:54:09.084756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:54:09.084774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:54:09.084790 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.084802 | orchestrator |
2025-06-02 00:54:09.084812 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 00:54:09.084822 | orchestrator | Monday 02 June 2025 00:52:24 +0000 (0:00:00.346) 0:00:21.818 ***********
2025-06-02 00:54:09.084832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:54:09.084841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 00:54:09.084851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 00:54:09.084860 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.084870 | orchestrator |
2025-06-02 00:54:09.084879 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 00:54:09.084889 | orchestrator | Monday 02 June 2025 00:52:25 +0000 (0:00:00.342) 0:00:22.161 ***********
2025-06-02 00:54:09.084899 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:54:09.084909 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:54:09.084918 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:54:09.084928 | orchestrator |
2025-06-02 00:54:09.084938 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 00:54:09.084947 | orchestrator | Monday 02 June 2025 00:52:25 +0000 (0:00:00.302) 0:00:22.464 ***********
2025-06-02 00:54:09.084957 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 00:54:09.084967 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 00:54:09.084976 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 00:54:09.084986 | orchestrator |
2025-06-02 00:54:09.084996 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-02 00:54:09.085006 | orchestrator | Monday 02 June 2025 00:52:25 +0000 (0:00:00.454) 0:00:22.918 ***********
2025-06-02 00:54:09.085015 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 00:54:09.085025 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 00:54:09.085035 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 00:54:09.085044 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:54:09.085061 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 00:54:09.085071 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 00:54:09.085080 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 00:54:09.085090 | orchestrator |
2025-06-02 00:54:09.085099 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-02 00:54:09.085109 | orchestrator | Monday 02 June 2025 00:52:26 +0000 (0:00:00.887) 0:00:23.806 ***********
2025-06-02 00:54:09.085118 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 00:54:09.085128 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 00:54:09.085138 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 00:54:09.085147 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 00:54:09.085157 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 00:54:09.085167 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 00:54:09.085177 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 00:54:09.085186 | orchestrator |
2025-06-02 00:54:09.085206 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-06-02 00:54:09.085217 | orchestrator | Monday 02 June 2025 00:52:28 +0000 (0:00:01.854) 0:00:25.661 ***********
2025-06-02 00:54:09.085227 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:54:09.085236 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:54:09.085246 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-06-02 00:54:09.085255 | orchestrator |
2025-06-02 00:54:09.085265 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-06-02 00:54:09.085275 | orchestrator | Monday 02 June 2025 00:52:29 +0000 (0:00:00.341) 0:00:26.002 ***********
2025-06-02 00:54:09.085285 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 00:54:09.085296 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 00:54:09.085306 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 00:54:09.085316 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 00:54:09.085326 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 00:54:09.085336 | orchestrator |
2025-06-02 00:54:09.085346 | orchestrator | TASK [generate keys] ***********************************************************
2025-06-02 00:54:09.085356 | orchestrator | Monday 02 June 2025 00:53:13 +0000 (0:00:44.789) 0:01:10.792 ***********
2025-06-02 00:54:09.085365 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085380 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085390 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085400 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085410 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085419 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085429 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-06-02 00:54:09.085438 | orchestrator |
2025-06-02 00:54:09.085448 | orchestrator | TASK [get keys from monitors] **************************************************
2025-06-02 00:54:09.085457 | orchestrator | Monday 02 June 2025 00:53:37 +0000 (0:00:23.186) 0:01:33.978 ***********
2025-06-02 00:54:09.085467 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085477 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085486 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085496 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085515 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 00:54:09.085525 | orchestrator |
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 00:54:09.085534 | orchestrator | 2025-06-02 00:54:09.085544 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-02 00:54:09.085554 | orchestrator | Monday 02 June 2025 00:53:48 +0000 (0:00:11.732) 0:01:45.711 *********** 2025-06-02 00:54:09.085563 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:54:09.085573 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 00:54:09.085582 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 00:54:09.085592 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:54:09.085602 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 00:54:09.085611 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 00:54:09.085635 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:54:09.085645 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 00:54:09.085655 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 00:54:09.085665 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:54:09.085674 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 00:54:09.085684 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 00:54:09.085694 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:54:09.085703 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-06-02 00:54:09.085713 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 00:54:09.085743 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 00:54:09.085754 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 00:54:09.085764 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 00:54:09.085774 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-06-02 00:54:09.085789 | orchestrator | 2025-06-02 00:54:09.085799 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:54:09.085809 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-06-02 00:54:09.085821 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-02 00:54:09.085831 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-02 00:54:09.085841 | orchestrator | 2025-06-02 00:54:09.085851 | orchestrator | 2025-06-02 00:54:09.085860 | orchestrator | 2025-06-02 00:54:09.085870 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:54:09.085880 | orchestrator | Monday 02 June 2025 00:54:05 +0000 (0:00:16.888) 0:02:02.599 *********** 2025-06-02 00:54:09.085889 | orchestrator | =============================================================================== 2025-06-02 00:54:09.085899 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.79s 2025-06-02 00:54:09.085909 | orchestrator | generate keys ---------------------------------------------------------- 23.19s 2025-06-02 00:54:09.085918 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.89s 
2025-06-02 00:54:09.085928 | orchestrator | get keys from monitors ------------------------------------------------- 11.73s 2025-06-02 00:54:09.085937 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.96s 2025-06-02 00:54:09.085947 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.85s 2025-06-02 00:54:09.085957 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.62s 2025-06-02 00:54:09.085967 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.89s 2025-06-02 00:54:09.085976 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.80s 2025-06-02 00:54:09.085986 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.74s 2025-06-02 00:54:09.085995 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.68s 2025-06-02 00:54:09.086005 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s 2025-06-02 00:54:09.086059 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.63s 2025-06-02 00:54:09.086072 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s 2025-06-02 00:54:09.086082 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.60s 2025-06-02 00:54:09.086092 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.58s 2025-06-02 00:54:09.086101 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.55s 2025-06-02 00:54:09.086111 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.54s 2025-06-02 00:54:09.086121 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.52s 2025-06-02 
00:54:09.086131 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.50s 2025-06-02 00:54:09.086140 | orchestrator | 2025-06-02 00:54:09 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:09.086151 | orchestrator | 2025-06-02 00:54:09 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:09.086161 | orchestrator | 2025-06-02 00:54:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:12.124853 | orchestrator | 2025-06-02 00:54:12 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state STARTED 2025-06-02 00:54:12.126171 | orchestrator | 2025-06-02 00:54:12 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:12.127822 | orchestrator | 2025-06-02 00:54:12 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:12.127888 | orchestrator | 2025-06-02 00:54:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:15.174315 | orchestrator | 2025-06-02 00:54:15 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state STARTED 2025-06-02 00:54:15.181917 | orchestrator | 2025-06-02 00:54:15 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:15.185164 | orchestrator | 2025-06-02 00:54:15 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:15.185524 | orchestrator | 2025-06-02 00:54:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:18.246549 | orchestrator | 2025-06-02 00:54:18 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state STARTED 2025-06-02 00:54:18.248016 | orchestrator | 2025-06-02 00:54:18 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:18.249995 | orchestrator | 2025-06-02 00:54:18 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:18.250088 | orchestrator | 2025-06-02 
00:54:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:21.302009 | orchestrator | 2025-06-02 00:54:21 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state STARTED 2025-06-02 00:54:21.302371 | orchestrator | 2025-06-02 00:54:21 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:21.304009 | orchestrator | 2025-06-02 00:54:21 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:21.304033 | orchestrator | 2025-06-02 00:54:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:24.357573 | orchestrator | 2025-06-02 00:54:24 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state STARTED 2025-06-02 00:54:24.359985 | orchestrator | 2025-06-02 00:54:24 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:24.362682 | orchestrator | 2025-06-02 00:54:24 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:24.363084 | orchestrator | 2025-06-02 00:54:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:27.409292 | orchestrator | 2025-06-02 00:54:27 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state STARTED 2025-06-02 00:54:27.410186 | orchestrator | 2025-06-02 00:54:27 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:27.412128 | orchestrator | 2025-06-02 00:54:27 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:27.412223 | orchestrator | 2025-06-02 00:54:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:30.462788 | orchestrator | 2025-06-02 00:54:30 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state STARTED 2025-06-02 00:54:30.463557 | orchestrator | 2025-06-02 00:54:30 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:30.466165 | orchestrator | 2025-06-02 00:54:30 | INFO  | Task 
559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:30.466189 | orchestrator | 2025-06-02 00:54:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:33.532257 | orchestrator | 2025-06-02 00:54:33 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state STARTED 2025-06-02 00:54:33.534143 | orchestrator | 2025-06-02 00:54:33 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:33.535962 | orchestrator | 2025-06-02 00:54:33 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:33.536028 | orchestrator | 2025-06-02 00:54:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:36.612343 | orchestrator | 2025-06-02 00:54:36 | INFO  | Task ff356bc4-2fd9-4077-b71b-a09522ad2f2c is in state SUCCESS 2025-06-02 00:54:36.614345 | orchestrator | 2025-06-02 00:54:36 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:36.615916 | orchestrator | 2025-06-02 00:54:36 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:36.618123 | orchestrator | 2025-06-02 00:54:36 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state STARTED 2025-06-02 00:54:36.618173 | orchestrator | 2025-06-02 00:54:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:39.699854 | orchestrator | 2025-06-02 00:54:39 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:39.701642 | orchestrator | 2025-06-02 00:54:39 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:39.702353 | orchestrator | 2025-06-02 00:54:39 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state STARTED 2025-06-02 00:54:39.702633 | orchestrator | 2025-06-02 00:54:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:42.768055 | orchestrator | 2025-06-02 00:54:42 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state 
STARTED 2025-06-02 00:54:42.768367 | orchestrator | 2025-06-02 00:54:42 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:42.769347 | orchestrator | 2025-06-02 00:54:42 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state STARTED 2025-06-02 00:54:42.769447 | orchestrator | 2025-06-02 00:54:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:45.812645 | orchestrator | 2025-06-02 00:54:45 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:45.813499 | orchestrator | 2025-06-02 00:54:45 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:45.815024 | orchestrator | 2025-06-02 00:54:45 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state STARTED 2025-06-02 00:54:45.815327 | orchestrator | 2025-06-02 00:54:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:48.870848 | orchestrator | 2025-06-02 00:54:48 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:48.871756 | orchestrator | 2025-06-02 00:54:48 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:48.873078 | orchestrator | 2025-06-02 00:54:48 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state STARTED 2025-06-02 00:54:48.873103 | orchestrator | 2025-06-02 00:54:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:51.927148 | orchestrator | 2025-06-02 00:54:51 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:51.929486 | orchestrator | 2025-06-02 00:54:51 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state STARTED 2025-06-02 00:54:51.932251 | orchestrator | 2025-06-02 00:54:51 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state STARTED 2025-06-02 00:54:51.932286 | orchestrator | 2025-06-02 00:54:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:54:54.963777 | orchestrator | 
2025-06-02 00:54:54 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED 2025-06-02 00:54:54.964922 | orchestrator | 2025-06-02 00:54:54 | INFO  | Task 559b296b-ddc0-4b80-aedb-1ff2a98af31d is in state SUCCESS 2025-06-02 00:54:54.967071 | orchestrator | 2025-06-02 00:54:54.967160 | orchestrator | 2025-06-02 00:54:54.967216 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-02 00:54:54.967238 | orchestrator | 2025-06-02 00:54:54.967254 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-02 00:54:54.967265 | orchestrator | Monday 02 June 2025 00:54:09 +0000 (0:00:00.146) 0:00:00.146 *********** 2025-06-02 00:54:54.967277 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-02 00:54:54.967290 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-02 00:54:54.967301 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-02 00:54:54.967312 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 00:54:54.967347 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-02 00:54:54.967360 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-02 00:54:54.967371 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-02 00:54:54.967381 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-02 00:54:54.967392 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-02 00:54:54.967403 | orchestrator | 
2025-06-02 00:54:54.967414 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-02 00:54:54.967425 | orchestrator | Monday 02 June 2025 00:54:14 +0000 (0:00:04.092) 0:00:04.239 *********** 2025-06-02 00:54:54.967437 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 00:54:54.967449 | orchestrator | 2025-06-02 00:54:54.967460 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-02 00:54:54.967470 | orchestrator | Monday 02 June 2025 00:54:14 +0000 (0:00:00.933) 0:00:05.172 *********** 2025-06-02 00:54:54.967481 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-02 00:54:54.967493 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-02 00:54:54.967504 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-02 00:54:54.967530 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 00:54:54.967542 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-02 00:54:54.967553 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-02 00:54:54.967564 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-02 00:54:54.967575 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-02 00:54:54.967585 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-02 00:54:54.967596 | orchestrator | 2025-06-02 00:54:54.967607 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-02 00:54:54.967618 | orchestrator | Monday 02 June 2025 00:54:27 +0000 (0:00:12.494) 0:00:17.667 *********** 2025-06-02 
00:54:54.967630 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-02 00:54:54.967641 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-02 00:54:54.967653 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-02 00:54:54.967666 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 00:54:54.967702 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-02 00:54:54.967715 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-02 00:54:54.967728 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-02 00:54:54.967854 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-02 00:54:54.967869 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-02 00:54:54.967883 | orchestrator | 2025-06-02 00:54:54.967896 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:54:54.967966 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:54:54.967981 | orchestrator | 2025-06-02 00:54:54.967994 | orchestrator | 2025-06-02 00:54:54.968007 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:54:54.968018 | orchestrator | Monday 02 June 2025 00:54:33 +0000 (0:00:06.432) 0:00:24.099 *********** 2025-06-02 00:54:54.968029 | orchestrator | =============================================================================== 2025-06-02 00:54:54.968039 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.49s 2025-06-02 00:54:54.968050 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.43s 2025-06-02 00:54:54.968061 | 
orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.09s 2025-06-02 00:54:54.968071 | orchestrator | Create share directory -------------------------------------------------- 0.93s 2025-06-02 00:54:54.968082 | orchestrator | 2025-06-02 00:54:54.968093 | orchestrator | 2025-06-02 00:54:54.968104 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:54:54.968114 | orchestrator | 2025-06-02 00:54:54.968141 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:54:54.968153 | orchestrator | Monday 02 June 2025 00:53:09 +0000 (0:00:00.252) 0:00:00.252 *********** 2025-06-02 00:54:54.968164 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:54:54.968176 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:54:54.968188 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:54:54.968199 | orchestrator | 2025-06-02 00:54:54.968210 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:54:54.968221 | orchestrator | Monday 02 June 2025 00:53:10 +0000 (0:00:00.292) 0:00:00.544 *********** 2025-06-02 00:54:54.968232 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-02 00:54:54.968243 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-02 00:54:54.968254 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-02 00:54:54.968265 | orchestrator | 2025-06-02 00:54:54.968275 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-02 00:54:54.968286 | orchestrator | 2025-06-02 00:54:54.968297 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 00:54:54.968308 | orchestrator | Monday 02 June 2025 00:53:10 +0000 (0:00:00.388) 0:00:00.932 *********** 2025-06-02 00:54:54.968318 | orchestrator | included: 
/ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:54:54.968329 | orchestrator | 2025-06-02 00:54:54.968340 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-02 00:54:54.968351 | orchestrator | Monday 02 June 2025 00:53:11 +0000 (0:00:00.478) 0:00:01.411 *********** 2025-06-02 00:54:54.968376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 00:54:54.968412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 00:54:54.968433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 00:54:54.968453 | orchestrator |
2025-06-02 00:54:54.968465 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-06-02 00:54:54.968476 | orchestrator | Monday 02 June 2025 00:53:11 +0000 (0:00:00.908) 0:00:02.320 ***********
2025-06-02 00:54:54.968487 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.968498 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.968509 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.968520 | orchestrator |
2025-06-02 00:54:54.968532 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 00:54:54.968542 | orchestrator | Monday 02 June 2025 00:53:12 +0000 (0:00:00.398) 0:00:02.718 ***********
2025-06-02 00:54:54.968553 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-02 00:54:54.968570 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-02 00:54:54.968582 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-06-02 00:54:54.968593 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-06-02 00:54:54.968604 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-06-02 00:54:54.968615 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-02 00:54:54.968625 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-06-02 00:54:54.968636 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-02 00:54:54.968647 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-06-02 00:54:54.968658 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-06-02 00:54:54.968686 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-06-02 00:54:54.968697 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-06-02 00:54:54.968708 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-06-02 00:54:54.968719 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-06-02 00:54:54.968730 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-06-02 00:54:54.968748 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-06-02 00:54:54.968758 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-02 00:54:54.968769 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-02 00:54:54.968780 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-06-02 00:54:54.968790 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-06-02 00:54:54.968801 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-06-02 00:54:54.968812 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-06-02 00:54:54.968827 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-06-02 00:54:54.968838 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-06-02 00:54:54.968850 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-06-02 00:54:54.968863 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-06-02 00:54:54.968874 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-06-02 00:54:54.968885 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-06-02 00:54:54.968896 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-06-02 00:54:54.968907 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-06-02 00:54:54.968918 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-06-02 00:54:54.968929 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-06-02 00:54:54.968940 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-06-02 00:54:54.968950 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-06-02 00:54:54.968961 | orchestrator |
2025-06-02 00:54:54.968972 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.968983 | orchestrator | Monday 02 June 2025 00:53:13 +0000 (0:00:00.681) 0:00:03.399 ***********
2025-06-02 00:54:54.968994 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.969005 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.969016 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.969027 | orchestrator |
2025-06-02 00:54:54.969038 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.969049 | orchestrator | Monday 02 June 2025 00:53:13 +0000 (0:00:00.310) 0:00:03.710 ***********
2025-06-02 00:54:54.969060 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969071 | orchestrator |
2025-06-02 00:54:54.969087 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.969099 | orchestrator | Monday 02 June 2025 00:53:13 +0000 (0:00:00.137) 0:00:03.848 ***********
2025-06-02 00:54:54.969110 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969127 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.969138 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.969149 | orchestrator |
2025-06-02 00:54:54.969160 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.969171 | orchestrator | Monday 02 June 2025 00:53:13 +0000 (0:00:00.460) 0:00:04.309 ***********
2025-06-02 00:54:54.969181 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.969193 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.969203 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.969214 | orchestrator |
2025-06-02 00:54:54.969225 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.969236 | orchestrator | Monday 02 June 2025 00:53:14 +0000 (0:00:00.317) 0:00:04.626 ***********
2025-06-02 00:54:54.969246 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969257 | orchestrator |
2025-06-02 00:54:54.969268 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.969278 | orchestrator | Monday 02 June 2025 00:53:14 +0000 (0:00:00.126) 0:00:04.752 ***********
2025-06-02 00:54:54.969289 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969300 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.969311 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.969322 | orchestrator |
2025-06-02 00:54:54.969333 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.969344 | orchestrator | Monday 02 June 2025 00:53:14 +0000 (0:00:00.271) 0:00:05.024 ***********
2025-06-02 00:54:54.969354 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.969365 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.969376 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.969387 | orchestrator |
2025-06-02 00:54:54.969398 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.969409 | orchestrator | Monday 02 June 2025 00:53:14 +0000 (0:00:00.266) 0:00:05.291 ***********
2025-06-02 00:54:54.969420 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969430 | orchestrator |
2025-06-02 00:54:54.969441 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.969452 | orchestrator | Monday 02 June 2025 00:53:15 +0000 (0:00:00.307) 0:00:05.598 ***********
2025-06-02 00:54:54.969463 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969474 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.969485 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.969496 | orchestrator |
2025-06-02 00:54:54.969506 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.969522 | orchestrator | Monday 02 June 2025 00:53:15 +0000 (0:00:00.329) 0:00:05.928 ***********
2025-06-02 00:54:54.969534 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.969545 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.969556 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.969566 | orchestrator |
2025-06-02 00:54:54.969577 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.969588 | orchestrator | Monday 02 June 2025 00:53:15 +0000 (0:00:00.283) 0:00:06.211 ***********
2025-06-02 00:54:54.969599 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969609 | orchestrator |
2025-06-02 00:54:54.969620 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.969631 | orchestrator | Monday 02 June 2025 00:53:15 +0000 (0:00:00.124) 0:00:06.335 ***********
2025-06-02 00:54:54.969642 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969653 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.969664 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.969709 | orchestrator |
2025-06-02 00:54:54.969720 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.969731 | orchestrator | Monday 02 June 2025 00:53:16 +0000 (0:00:00.375) 0:00:06.710 ***********
2025-06-02 00:54:54.969742 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.969753 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.969772 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.969783 | orchestrator |
2025-06-02 00:54:54.969794 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.969805 | orchestrator | Monday 02 June 2025 00:53:16 +0000 (0:00:00.457) 0:00:07.168 ***********
2025-06-02 00:54:54.969816 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969826 | orchestrator |
2025-06-02 00:54:54.969837 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.969848 | orchestrator | Monday 02 June 2025 00:53:16 +0000 (0:00:00.134) 0:00:07.303 ***********
2025-06-02 00:54:54.969859 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.969869 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.969881 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.969891 | orchestrator |
2025-06-02 00:54:54.969902 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.969913 | orchestrator | Monday 02 June 2025 00:53:17 +0000 (0:00:00.273) 0:00:07.577 ***********
2025-06-02 00:54:54.969924 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.969935 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.969946 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.969956 | orchestrator |
2025-06-02 00:54:54.969967 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.969978 | orchestrator | Monday 02 June 2025 00:53:17 +0000 (0:00:00.332) 0:00:07.909 ***********
2025-06-02 00:54:54.969989 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970000 | orchestrator |
2025-06-02 00:54:54.970011 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.970069 | orchestrator | Monday 02 June 2025 00:53:17 +0000 (0:00:00.128) 0:00:08.038 ***********
2025-06-02 00:54:54.970081 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970092 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.970103 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.970114 | orchestrator |
2025-06-02 00:54:54.970125 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.970136 | orchestrator | Monday 02 June 2025 00:53:18 +0000 (0:00:00.452) 0:00:08.490 ***********
2025-06-02 00:54:54.970147 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.970165 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.970176 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.970187 | orchestrator |
2025-06-02 00:54:54.970198 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.970209 | orchestrator | Monday 02 June 2025 00:53:18 +0000 (0:00:00.311) 0:00:08.801 ***********
2025-06-02 00:54:54.970220 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970230 | orchestrator |
2025-06-02 00:54:54.970241 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.970252 | orchestrator | Monday 02 June 2025 00:53:18 +0000 (0:00:00.146) 0:00:08.948 ***********
2025-06-02 00:54:54.970263 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970274 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.970285 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.970296 | orchestrator |
2025-06-02 00:54:54.970307 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.970317 | orchestrator | Monday 02 June 2025 00:53:18 +0000 (0:00:00.289) 0:00:09.237 ***********
2025-06-02 00:54:54.970328 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.970339 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.970350 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.970361 | orchestrator |
2025-06-02 00:54:54.970372 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.970383 | orchestrator | Monday 02 June 2025 00:53:19 +0000 (0:00:00.281) 0:00:09.519 ***********
2025-06-02 00:54:54.970394 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970404 | orchestrator |
2025-06-02 00:54:54.970415 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.970433 | orchestrator | Monday 02 June 2025 00:53:19 +0000 (0:00:00.122) 0:00:09.642 ***********
2025-06-02 00:54:54.970444 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970455 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.970467 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.970477 | orchestrator |
2025-06-02 00:54:54.970488 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.970499 | orchestrator | Monday 02 June 2025 00:53:19 +0000 (0:00:00.438) 0:00:10.080 ***********
2025-06-02 00:54:54.970510 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.970521 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.970532 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.970543 | orchestrator |
2025-06-02 00:54:54.970554 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.970565 | orchestrator | Monday 02 June 2025 00:53:19 +0000 (0:00:00.290) 0:00:10.370 ***********
2025-06-02 00:54:54.970576 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970586 | orchestrator |
2025-06-02 00:54:54.970597 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.970614 | orchestrator | Monday 02 June 2025 00:53:20 +0000 (0:00:00.126) 0:00:10.497 ***********
2025-06-02 00:54:54.970625 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970636 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.970647 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.970658 | orchestrator |
2025-06-02 00:54:54.970686 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 00:54:54.970698 | orchestrator | Monday 02 June 2025 00:53:20 +0000 (0:00:00.289) 0:00:10.787 ***********
2025-06-02 00:54:54.970709 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:54:54.970720 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:54:54.970732 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:54:54.970742 | orchestrator |
2025-06-02 00:54:54.970753 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 00:54:54.970764 | orchestrator | Monday 02 June 2025 00:53:20 +0000 (0:00:00.456) 0:00:11.244 ***********
2025-06-02 00:54:54.970775 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970786 | orchestrator |
2025-06-02 00:54:54.970797 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 00:54:54.970808 | orchestrator | Monday 02 June 2025 00:53:21 +0000 (0:00:00.133) 0:00:11.377 ***********
2025-06-02 00:54:54.970819 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.970830 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.970841 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.970851 | orchestrator |
2025-06-02 00:54:54.970862 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-06-02 00:54:54.970873 | orchestrator | Monday 02 June 2025 00:53:21 +0000 (0:00:00.290) 0:00:11.667 ***********
2025-06-02 00:54:54.970884 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:54:54.970895 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:54:54.970906 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:54:54.970917 | orchestrator |
2025-06-02 00:54:54.970928 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-06-02 00:54:54.970938 | orchestrator | Monday 02 June 2025 00:53:22 +0000 (0:00:01.545) 0:00:13.212 ***********
2025-06-02 00:54:54.970949 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 00:54:54.970960 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 00:54:54.970971 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 00:54:54.970982 | orchestrator |
2025-06-02 00:54:54.970993 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-06-02 00:54:54.971003 | orchestrator | Monday 02 June 2025 00:53:24 +0000 (0:00:01.815) 0:00:15.028 ***********
2025-06-02 00:54:54.971014 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-02 00:54:54.971038 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-02 00:54:54.971049 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-02 00:54:54.971060 | orchestrator |
2025-06-02 00:54:54.971071 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-06-02 00:54:54.971087 | orchestrator | Monday 02 June 2025 00:53:26 +0000 (0:00:02.085) 0:00:17.113 ***********
2025-06-02 00:54:54.971098 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-02 00:54:54.971109 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-02 00:54:54.971120 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-02 00:54:54.971131 | orchestrator |
2025-06-02 00:54:54.971142 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-06-02 00:54:54.971153 | orchestrator | Monday 02 June 2025 00:53:28 +0000 (0:00:01.692) 0:00:18.806 ***********
2025-06-02 00:54:54.971163 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.971174 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.971185 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.971196 | orchestrator |
2025-06-02 00:54:54.971207 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-06-02 00:54:54.971218 | orchestrator | Monday 02 June 2025 00:53:28 +0000 (0:00:00.270) 0:00:19.076 ***********
2025-06-02 00:54:54.971229 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.971240 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.971251 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.971262 | orchestrator |
2025-06-02 00:54:54.971272 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 00:54:54.971283 | orchestrator | Monday 02 June 2025 00:53:28 +0000 (0:00:00.277) 0:00:19.354 ***********
2025-06-02 00:54:54.971294 | orchestrator | included:
/ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:54:54.971305 | orchestrator | 2025-06-02 00:54:54.971316 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-02 00:54:54.971326 | orchestrator | Monday 02 June 2025 00:53:29 +0000 (0:00:00.762) 0:00:20.116 *********** 2025-06-02 00:54:54.971345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 00:54:54.971375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 00:54:54.971394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 00:54:54.971413 | orchestrator | 2025-06-02 00:54:54.971424 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-02 00:54:54.971435 | orchestrator | Monday 02 June 2025 00:53:31 +0000 (0:00:01.413) 0:00:21.530 *********** 2025-06-02 00:54:54.971461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 00:54:54.971474 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.971511 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.971542 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.971553 | orchestrator |
2025-06-02 00:54:54.971564 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-06-02 00:54:54.971581 | orchestrator | Monday 02 June 2025 00:53:31 +0000 (0:00:00.809) 0:00:22.340 ***********
2025-06-02 00:54:54.971613 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.971650 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.971706 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.971717 | orchestrator |
2025-06-02 00:54:54.971729 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2025-06-02 00:54:54.971740 | orchestrator | Monday 02 June 2025 00:53:32 +0000 (0:00:01.002) 0:00:23.343 ***********
2025-06-02 00:54:54.971758 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:54:54.971787 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:54:54.971807 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:54:54.971825 | orchestrator |
2025-06-02 00:54:54.971837 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 00:54:54.971848 | orchestrator | Monday 02 June 2025 00:53:34 +0000 (0:00:01.210) 0:00:24.553 ***********
2025-06-02 00:54:54.971859 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:54:54.971870 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:54:54.971881 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:54:54.971892 | orchestrator |
2025-06-02 00:54:54.971903 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 00:54:54.971914 | orchestrator | Monday 02 June 2025 00:53:34 +0000 (0:00:00.361) 0:00:24.914 ***********
2025-06-02 00:54:54.971931 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:54:54.971943 | orchestrator |
2025-06-02 00:54:54.971954 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-06-02 00:54:54.971965 | orchestrator | Monday 02 June 2025 00:53:35 +0000 (0:00:00.821) 0:00:25.735 ***********
2025-06-02 00:54:54.971976 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:54:54.971987 | orchestrator |
2025-06-02 00:54:54.971998 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-06-02 00:54:54.972009 | orchestrator | Monday 02 June 2025 00:53:37 +0000 (0:00:02.118) 0:00:27.854 ***********
2025-06-02 00:54:54.972020 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:54:54.972030 | orchestrator |
2025-06-02
00:54:54.972041 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-06-02 00:54:54.972052 | orchestrator | Monday 02 June 2025 00:53:39 +0000 (0:00:01.998) 0:00:29.852 ***********
2025-06-02 00:54:54.972063 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:54:54.972074 | orchestrator |
2025-06-02 00:54:54.972085 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-02 00:54:54.972096 | orchestrator | Monday 02 June 2025 00:53:54 +0000 (0:00:14.834) 0:00:44.687 ***********
2025-06-02 00:54:54.972106 | orchestrator |
2025-06-02 00:54:54.972117 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-02 00:54:54.972128 | orchestrator | Monday 02 June 2025 00:53:54 +0000 (0:00:00.083) 0:00:44.770 ***********
2025-06-02 00:54:54.972139 | orchestrator |
2025-06-02 00:54:54.972150 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-02 00:54:54.972161 | orchestrator | Monday 02 June 2025 00:53:54 +0000 (0:00:00.063) 0:00:44.834 ***********
2025-06-02 00:54:54.972171 | orchestrator |
2025-06-02 00:54:54.972182 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-06-02 00:54:54.972205 | orchestrator | Monday 02 June 2025 00:53:54 +0000 (0:00:00.066) 0:00:44.900 ***********
2025-06-02 00:54:54.972215 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:54:54.972227 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:54:54.972238 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:54:54.972249 | orchestrator |
2025-06-02 00:54:54.972259 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:54:54.972271 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-06-02 00:54:54.972281 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-02 00:54:54.972297 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-02 00:54:54.972308 | orchestrator |
2025-06-02 00:54:54.972319 | orchestrator |
2025-06-02 00:54:54.972330 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:54:54.972341 | orchestrator | Monday 02 June 2025 00:54:52 +0000 (0:00:57.741) 0:01:42.642 ***********
2025-06-02 00:54:54.972351 | orchestrator | ===============================================================================
2025-06-02 00:54:54.972362 | orchestrator | horizon : Restart horizon container ------------------------------------ 57.74s
2025-06-02 00:54:54.972373 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.83s
2025-06-02 00:54:54.972384 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.12s
2025-06-02 00:54:54.972395 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.09s
2025-06-02 00:54:54.972406 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.00s
2025-06-02 00:54:54.972416 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.82s
2025-06-02 00:54:54.972427 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.69s
2025-06-02 00:54:54.972438 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.55s
2025-06-02 00:54:54.972449 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.41s
2025-06-02 00:54:54.972460 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.21s
2025-06-02 00:54:54.972471 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.00s
2025-06-02 00:54:54.972482 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.91s
2025-06-02 00:54:54.972493 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.82s
2025-06-02 00:54:54.972503 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.81s
2025-06-02 00:54:54.972514 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s
2025-06-02 00:54:54.972525 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s
2025-06-02 00:54:54.972536 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.48s
2025-06-02 00:54:54.972547 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.46s
2025-06-02 00:54:54.972558 | orchestrator | horizon : Update policy file name --------------------------------------- 0.46s
2025-06-02 00:54:54.972569 | orchestrator | horizon : Update policy file name --------------------------------------- 0.46s
2025-06-02 00:54:54.972580 | orchestrator | 2025-06-02 00:54:54 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state STARTED
2025-06-02 00:54:54.972591 | orchestrator | 2025-06-02 00:54:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:54:58.001041 | orchestrator | 2025-06-02 00:54:57 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED
2025-06-02 00:54:58.002437 | orchestrator | 2025-06-02 00:54:58 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state STARTED
2025-06-02 00:54:58.003514 | orchestrator | 2025-06-02 00:54:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:55:01.041566 | orchestrator | 2025-06-02 00:55:01 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED
2025-06-02 00:55:01.043003 | orchestrator | 
2025-06-02 00:55:01 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state STARTED
2025-06-02 00:55:01.043020 | orchestrator | 2025-06-02 00:55:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:55:31.507136 | orchestrator | 2025-06-02 00:55:31 | INFO  | Task c806582b-a1d4-4bd9-a99f-6505b3597ac7 is in state STARTED
2025-06-02 00:55:31.507245 | orchestrator | 2025-06-02 00:55:31 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED
2025-06-02 00:55:31.507261 | orchestrator | 2025-06-02 00:55:31 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:55:31.507273 | orchestrator | 2025-06-02 00:55:31 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:55:31.507284 | orchestrator | 2025-06-02 00:55:31 | INFO  | Task 27c1ed68-f957-4e41-8270-a1358fbf8e78 is in state SUCCESS
2025-06-02 00:55:31.507296 | orchestrator | 2025-06-02 00:55:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:55:34.570529 | orchestrator | 2025-06-02 00:55:34 | INFO  | Task c806582b-a1d4-4bd9-a99f-6505b3597ac7 is in state SUCCESS
2025-06-02 00:55:34.570791 | orchestrator | 2025-06-02 00:55:34 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED
2025-06-02 00:55:34.571707 | orchestrator | 2025-06-02 00:55:34 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:55:34.575039 | orchestrator | 2025-06-02 00:55:34 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:55:34.575163 | orchestrator | 2025-06-02 00:55:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:55:37.623344 | orchestrator | 2025-06-02 00:55:37 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:55:37.624068 | orchestrator | 2025-06-02 00:55:37 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state STARTED
2025-06-02 00:55:37.625321 | orchestrator | 2025-06-02 00:55:37 | INFO  | Task 3d2892ee-c419-4d78-99d3-36f7fa33a79a is in state STARTED
2025-06-02 00:55:37.626117 | orchestrator | 2025-06-02 00:55:37 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:55:37.627721 | orchestrator | 2025-06-02 00:55:37 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:55:37.627748 | orchestrator | 2025-06-02 00:55:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:55:49.786198 | orchestrator | 2025-06-02 00:55:49 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:55:49.791601 | orchestrator | 2025-06-02 00:55:49 | INFO  | Task aa83b60a-8294-4b25-bdbe-2113554c1e85 is in state SUCCESS
2025-06-02 00:55:49.794768 | orchestrator |
2025-06-02 00:55:49.794807 | orchestrator |
2025-06-02 00:55:49.794820 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-06-02 00:55:49.794832 | orchestrator |
2025-06-02 00:55:49.794843 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-06-02 00:55:49.794855 | orchestrator | Monday 02 June 2025 00:54:37 +0000 (0:00:00.190) 0:00:00.190 ***********
2025-06-02 00:55:49.794867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-06-02 00:55:49.794881 | orchestrator |
2025-06-02 00:55:49.794893 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-06-02 00:55:49.794904 | orchestrator | Monday 02 June 2025 00:54:38 +0000 (0:00:00.178) 0:00:00.369 ***********
2025-06-02 00:55:49.794916 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-06-02 00:55:49.794927 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-06-02 00:55:49.794938 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-06-02 00:55:49.794949 | orchestrator |
2025-06-02 00:55:49.794960 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] 
********************
2025-06-02 00:55:49.794971 | orchestrator | Monday 02 June 2025 00:54:39 +0000 (0:00:01.126) 0:00:01.495 ***********
2025-06-02 00:55:49.794982 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-06-02 00:55:49.794994 | orchestrator |
2025-06-02 00:55:49.795005 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-06-02 00:55:49.795016 | orchestrator | Monday 02 June 2025 00:54:40 +0000 (0:00:01.061) 0:00:02.557 ***********
2025-06-02 00:55:49.795027 | orchestrator | changed: [testbed-manager]
2025-06-02 00:55:49.795040 | orchestrator |
2025-06-02 00:55:49.795052 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-06-02 00:55:49.795063 | orchestrator | Monday 02 June 2025 00:54:41 +0000 (0:00:00.727) 0:00:03.458 ***********
2025-06-02 00:55:49.795074 | orchestrator | changed: [testbed-manager]
2025-06-02 00:55:49.795085 | orchestrator |
2025-06-02 00:55:49.795096 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-06-02 00:55:49.795107 | orchestrator | Monday 02 June 2025 00:54:41 +0000 (0:00:00.727) 0:00:04.186 ***********
2025-06-02 00:55:49.795143 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-06-02 00:55:49.795155 | orchestrator | ok: [testbed-manager]
2025-06-02 00:55:49.795166 | orchestrator |
2025-06-02 00:55:49.795177 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-06-02 00:55:49.795187 | orchestrator | Monday 02 June 2025 00:55:19 +0000 (0:00:37.443) 0:00:41.629 ***********
2025-06-02 00:55:49.795198 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-06-02 00:55:49.795354 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-06-02 00:55:49.795369 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-06-02 00:55:49.795395 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-06-02 00:55:49.795409 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-06-02 00:55:49.795421 | orchestrator |
2025-06-02 00:55:49.795435 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-06-02 00:55:49.795448 | orchestrator | Monday 02 June 2025 00:55:23 +0000 (0:00:03.835) 0:00:45.465 ***********
2025-06-02 00:55:49.795460 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-06-02 00:55:49.795474 | orchestrator |
2025-06-02 00:55:49.795486 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-06-02 00:55:49.795499 | orchestrator | Monday 02 June 2025 00:55:23 +0000 (0:00:00.427) 0:00:45.893 ***********
2025-06-02 00:55:49.795512 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:55:49.795525 | orchestrator |
2025-06-02 00:55:49.795537 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-06-02 00:55:49.795550 | orchestrator | Monday 02 June 2025 00:55:23 +0000 (0:00:00.112) 0:00:46.005 ***********
2025-06-02 00:55:49.795563 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:55:49.795576 | orchestrator |
2025-06-02 00:55:49.795588 | orchestrator | RUNNING HANDLER
[osism.services.cephclient : Restart cephclient service] *******
2025-06-02 00:55:49.795601 | orchestrator | Monday 02 June 2025 00:55:24 +0000 (0:00:00.282) 0:00:46.287 ***********
2025-06-02 00:55:49.795613 | orchestrator | changed: [testbed-manager]
2025-06-02 00:55:49.795650 | orchestrator |
2025-06-02 00:55:49.795663 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-06-02 00:55:49.795675 | orchestrator | Monday 02 June 2025 00:55:25 +0000 (0:00:01.411) 0:00:47.699 ***********
2025-06-02 00:55:49.795688 | orchestrator | changed: [testbed-manager]
2025-06-02 00:55:49.795701 | orchestrator |
2025-06-02 00:55:49.795712 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-06-02 00:55:49.795723 | orchestrator | Monday 02 June 2025 00:55:26 +0000 (0:00:00.867) 0:00:48.566 ***********
2025-06-02 00:55:49.795734 | orchestrator | changed: [testbed-manager]
2025-06-02 00:55:49.795744 | orchestrator |
2025-06-02 00:55:49.795755 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-06-02 00:55:49.795766 | orchestrator | Monday 02 June 2025 00:55:27 +0000 (0:00:00.628) 0:00:49.194 ***********
2025-06-02 00:55:49.795777 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-06-02 00:55:49.795788 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-06-02 00:55:49.795799 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-06-02 00:55:49.795811 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-06-02 00:55:49.795821 | orchestrator |
2025-06-02 00:55:49.795832 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:55:49.795843 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 00:55:49.795856 | orchestrator |
2025-06-02 00:55:49.795867 | orchestrator |
2025-06-02
00:55:49.795889 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:55:49.795901 | orchestrator | Monday 02 June 2025 00:55:28 +0000 (0:00:01.390) 0:00:50.584 ***********
2025-06-02 00:55:49.795912 | orchestrator | ===============================================================================
2025-06-02 00:55:49.795932 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.44s
2025-06-02 00:55:49.795943 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.84s
2025-06-02 00:55:49.795954 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.41s
2025-06-02 00:55:49.795965 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.39s
2025-06-02 00:55:49.795976 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.13s
2025-06-02 00:55:49.795987 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.06s
2025-06-02 00:55:49.795997 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s
2025-06-02 00:55:49.796008 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.87s
2025-06-02 00:55:49.796019 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.73s
2025-06-02 00:55:49.796030 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.63s
2025-06-02 00:55:49.796040 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s
2025-06-02 00:55:49.796051 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s
2025-06-02 00:55:49.796062 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.18s
2025-06-02 00:55:49.796073 |
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2025-06-02 00:55:49.796084 | orchestrator |
2025-06-02 00:55:49.796095 | orchestrator |
2025-06-02 00:55:49.796106 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 00:55:49.796116 | orchestrator |
2025-06-02 00:55:49.796127 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 00:55:49.796138 | orchestrator | Monday 02 June 2025 00:55:32 +0000 (0:00:00.176) 0:00:00.176 ***********
2025-06-02 00:55:49.796149 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:55:49.796160 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:55:49.796172 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:55:49.796182 | orchestrator |
2025-06-02 00:55:49.796193 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 00:55:49.796204 | orchestrator | Monday 02 June 2025 00:55:32 +0000 (0:00:00.262) 0:00:00.439 ***********
2025-06-02 00:55:49.796215 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-02 00:55:49.796226 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-02 00:55:49.796237 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-02 00:55:49.796248 | orchestrator |
2025-06-02 00:55:49.796259 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-02 00:55:49.796270 | orchestrator |
2025-06-02 00:55:49.796286 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-02 00:55:49.796297 | orchestrator | Monday 02 June 2025 00:55:33 +0000 (0:00:00.645) 0:00:01.084 ***********
2025-06-02 00:55:49.796308 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:55:49.796319 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:55:49.796330 | orchestrator | ok:
[testbed-node-1]
2025-06-02 00:55:49.796341 | orchestrator |
2025-06-02 00:55:49.796352 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:55:49.796363 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:55:49.796375 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:55:49.796386 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 00:55:49.796397 | orchestrator |
2025-06-02 00:55:49.796408 | orchestrator |
2025-06-02 00:55:49.796419 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:55:49.796430 | orchestrator | Monday 02 June 2025 00:55:34 +0000 (0:00:00.675) 0:00:01.760 ***********
2025-06-02 00:55:49.796447 | orchestrator | ===============================================================================
2025-06-02 00:55:49.796458 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.68s
2025-06-02 00:55:49.796469 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2025-06-02 00:55:49.796479 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2025-06-02 00:55:49.796490 | orchestrator |
2025-06-02 00:55:49.796501 | orchestrator |
2025-06-02 00:55:49.796512 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 00:55:49.796523 | orchestrator |
2025-06-02 00:55:49.796534 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 00:55:49.796544 | orchestrator | Monday 02 June 2025 00:53:10 +0000 (0:00:00.248) 0:00:00.248 ***********
2025-06-02 00:55:49.796555 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:55:49.796566 |
orchestrator | ok: [testbed-node-1] 2025-06-02 00:55:49.796592 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:55:49.796604 | orchestrator | 2025-06-02 00:55:49.796646 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:55:49.796659 | orchestrator | Monday 02 June 2025 00:53:10 +0000 (0:00:00.311) 0:00:00.559 *********** 2025-06-02 00:55:49.796670 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-02 00:55:49.796681 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-02 00:55:49.796692 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-02 00:55:49.796703 | orchestrator | 2025-06-02 00:55:49.796715 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-06-02 00:55:49.796726 | orchestrator | 2025-06-02 00:55:49.796743 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 00:55:49.796755 | orchestrator | Monday 02 June 2025 00:53:10 +0000 (0:00:00.419) 0:00:00.979 *********** 2025-06-02 00:55:49.796766 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:55:49.796777 | orchestrator | 2025-06-02 00:55:49.796788 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-02 00:55:49.796799 | orchestrator | Monday 02 June 2025 00:53:11 +0000 (0:00:00.548) 0:00:01.528 *********** 2025-06-02 00:55:49.796816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.796838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.796860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.796879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.796893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.796905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.796916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.796939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.796951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.796962 | orchestrator | 2025-06-02 00:55:49.796973 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-02 00:55:49.796984 | orchestrator | Monday 02 June 2025 00:53:12 +0000 (0:00:01.553) 0:00:03.082 *********** 2025-06-02 00:55:49.796995 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-02 00:55:49.797007 | orchestrator | 2025-06-02 00:55:49.797018 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-02 00:55:49.797028 | orchestrator | Monday 02 June 2025 00:53:13 +0000 (0:00:00.848) 0:00:03.931 *********** 2025-06-02 00:55:49.797039 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:55:49.797051 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:55:49.797062 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:55:49.797073 | orchestrator | 2025-06-02 00:55:49.797084 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-02 00:55:49.797094 | orchestrator | Monday 02 June 2025 00:53:14 +0000 (0:00:00.537) 0:00:04.468 *********** 2025-06-02 
00:55:49.797105 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 00:55:49.797116 | orchestrator | 2025-06-02 00:55:49.797127 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 00:55:49.797143 | orchestrator | Monday 02 June 2025 00:53:14 +0000 (0:00:00.695) 0:00:05.163 *********** 2025-06-02 00:55:49.797155 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:55:49.797166 | orchestrator | 2025-06-02 00:55:49.797177 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-02 00:55:49.797187 | orchestrator | Monday 02 June 2025 00:53:15 +0000 (0:00:00.555) 0:00:05.719 *********** 2025-06-02 00:55:49.797199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.797223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.797236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2025-06-02 00:55:49.797249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.797268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.797280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.797298 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.797314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.797329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.797347 | 
orchestrator | 2025-06-02 00:55:49.797367 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-02 00:55:49.797385 | orchestrator | Monday 02 June 2025 00:53:18 +0000 (0:00:03.303) 0:00:09.023 *********** 2025-06-02 00:55:49.797415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 00:55:49.797436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 
00:55:49.797453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:55:49.797484 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.797519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 00:55:49.797533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.797544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:55:49.797556 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.797576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 00:55:49.797599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.797611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:55:49.797701 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:55:49.797769 | orchestrator | 2025-06-02 00:55:49.797786 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-02 00:55:49.797811 | orchestrator | Monday 02 June 2025 00:53:19 +0000 (0:00:00.559) 0:00:09.582 *********** 2025-06-02 00:55:49.797831 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 00:55:49.797850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.798377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:55:49.798419 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.798431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 00:55:49.798442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.798460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:55:49.798470 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.798480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-06-02 00:55:49.798500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.798517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 00:55:49.798528 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:55:49.798537 | orchestrator | 2025-06-02 00:55:49.798547 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-02 00:55:49.798558 | orchestrator | Monday 02 June 2025 00:53:20 +0000 (0:00:00.723) 0:00:10.306 *********** 2025-06-02 00:55:49.798572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.798584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.798600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.798681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.798696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.798706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.798722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.798732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.798742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.798759 | orchestrator | 2025-06-02 00:55:49.798769 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-02 00:55:49.798800 | orchestrator | Monday 02 June 2025 00:53:23 +0000 (0:00:03.525) 0:00:13.832 *********** 2025-06-02 00:55:49.798819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.798831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.798846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-06-02 00:55:49.798858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.798874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.798888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.798897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.798905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.798918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.798926 | orchestrator | 2025-06-02 00:55:49.798934 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-02 00:55:49.798942 | orchestrator | Monday 02 June 2025 00:53:28 +0000 (0:00:05.001) 0:00:18.833 *********** 2025-06-02 00:55:49.798950 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:55:49.798959 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:55:49.798968 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:55:49.798977 | orchestrator | 2025-06-02 00:55:49.798987 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-02 00:55:49.798996 | orchestrator | Monday 02 June 2025 00:53:29 +0000 (0:00:01.269) 0:00:20.103 *********** 2025-06-02 00:55:49.799005 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.799015 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.799030 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:55:49.799040 | orchestrator | 2025-06-02 00:55:49.799049 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-02 00:55:49.799059 | orchestrator | Monday 02 June 2025 00:53:30 +0000 (0:00:00.649) 0:00:20.752 *********** 2025-06-02 00:55:49.799068 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.799078 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.799087 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:55:49.799097 | orchestrator 
| 2025-06-02 00:55:49.799107 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-02 00:55:49.799116 | orchestrator | Monday 02 June 2025 00:53:31 +0000 (0:00:00.554) 0:00:21.307 *********** 2025-06-02 00:55:49.799125 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.799134 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.799143 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:55:49.799152 | orchestrator | 2025-06-02 00:55:49.799162 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-02 00:55:49.799171 | orchestrator | Monday 02 June 2025 00:53:31 +0000 (0:00:00.287) 0:00:21.594 *********** 2025-06-02 00:55:49.799186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.799197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.799210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.799219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.799240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.799250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 00:55:49.799258 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.799267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.799279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.799292 | orchestrator | 2025-06-02 
00:55:49.799300 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 00:55:49.799308 | orchestrator | Monday 02 June 2025 00:53:33 +0000 (0:00:02.273) 0:00:23.868 *********** 2025-06-02 00:55:49.799316 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.799324 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.799332 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:55:49.799340 | orchestrator | 2025-06-02 00:55:49.799348 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-02 00:55:49.799356 | orchestrator | Monday 02 June 2025 00:53:33 +0000 (0:00:00.275) 0:00:24.144 *********** 2025-06-02 00:55:49.799364 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 00:55:49.799372 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 00:55:49.799382 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 00:55:49.799395 | orchestrator | 2025-06-02 00:55:49.799408 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-02 00:55:49.799421 | orchestrator | Monday 02 June 2025 00:53:35 +0000 (0:00:01.841) 0:00:25.986 *********** 2025-06-02 00:55:49.799434 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 00:55:49.799445 | orchestrator | 2025-06-02 00:55:49.799453 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-02 00:55:49.799460 | orchestrator | Monday 02 June 2025 00:53:36 +0000 (0:00:00.861) 0:00:26.848 *********** 2025-06-02 00:55:49.799468 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.799515 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.799525 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 00:55:49.799533 | orchestrator | 2025-06-02 00:55:49.799541 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-02 00:55:49.799549 | orchestrator | Monday 02 June 2025 00:53:37 +0000 (0:00:00.525) 0:00:27.374 *********** 2025-06-02 00:55:49.799557 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 00:55:49.799571 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 00:55:49.799579 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 00:55:49.799587 | orchestrator | 2025-06-02 00:55:49.799595 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-02 00:55:49.799603 | orchestrator | Monday 02 June 2025 00:53:38 +0000 (0:00:01.021) 0:00:28.395 *********** 2025-06-02 00:55:49.799611 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:55:49.799640 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:55:49.799649 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:55:49.799657 | orchestrator | 2025-06-02 00:55:49.799665 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-02 00:55:49.799672 | orchestrator | Monday 02 June 2025 00:53:38 +0000 (0:00:00.282) 0:00:28.678 *********** 2025-06-02 00:55:49.799680 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 00:55:49.799688 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 00:55:49.799696 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 00:55:49.799703 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 00:55:49.799711 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 00:55:49.799719 | orchestrator | 
changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 00:55:49.799727 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 00:55:49.799742 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 00:55:49.799750 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 00:55:49.799758 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 00:55:49.799765 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 00:55:49.799773 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 00:55:49.799781 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 00:55:49.799789 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 00:55:49.799797 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 00:55:49.799809 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 00:55:49.799817 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 00:55:49.799825 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 00:55:49.799832 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 00:55:49.799840 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 
00:55:49.799848 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 00:55:49.799856 | orchestrator | 2025-06-02 00:55:49.799863 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-02 00:55:49.799871 | orchestrator | Monday 02 June 2025 00:53:47 +0000 (0:00:08.557) 0:00:37.236 *********** 2025-06-02 00:55:49.799879 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 00:55:49.799886 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 00:55:49.799894 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 00:55:49.799902 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 00:55:49.799910 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 00:55:49.799917 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 00:55:49.799925 | orchestrator | 2025-06-02 00:55:49.799933 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-02 00:55:49.799941 | orchestrator | Monday 02 June 2025 00:53:49 +0000 (0:00:02.466) 0:00:39.702 *********** 2025-06-02 00:55:49.799955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.799971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.799984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 00:55:49.799993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.800002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 
00:55:49.800017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 00:55:49.800031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.800039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.800134 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 00:55:49.800144 | orchestrator | 2025-06-02 00:55:49.800152 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 00:55:49.800160 | orchestrator | Monday 02 June 2025 00:53:51 +0000 (0:00:02.180) 0:00:41.882 *********** 2025-06-02 00:55:49.800168 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.800176 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.800184 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:55:49.800192 | orchestrator | 2025-06-02 00:55:49.800200 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-02 00:55:49.800208 | orchestrator | Monday 02 June 2025 00:53:51 +0000 (0:00:00.278) 0:00:42.161 *********** 2025-06-02 00:55:49.800216 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:55:49.800224 | orchestrator | 2025-06-02 00:55:49.800232 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-02 00:55:49.800240 | orchestrator | Monday 02 June 2025 00:53:54 +0000 (0:00:02.147) 0:00:44.309 *********** 2025-06-02 00:55:49.800248 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:55:49.800256 | orchestrator | 2025-06-02 00:55:49.800264 | orchestrator | TASK [keystone : Checking for any running keystone_fernet 
containers] ********** 2025-06-02 00:55:49.800272 | orchestrator | Monday 02 June 2025 00:53:56 +0000 (0:00:02.541) 0:00:46.851 *********** 2025-06-02 00:55:49.800280 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:55:49.800288 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:55:49.800296 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:55:49.800304 | orchestrator | 2025-06-02 00:55:49.800312 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-02 00:55:49.800320 | orchestrator | Monday 02 June 2025 00:53:57 +0000 (0:00:00.797) 0:00:47.649 *********** 2025-06-02 00:55:49.800328 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:55:49.800336 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:55:49.800344 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:55:49.800352 | orchestrator | 2025-06-02 00:55:49.800360 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-02 00:55:49.800368 | orchestrator | Monday 02 June 2025 00:53:57 +0000 (0:00:00.304) 0:00:47.953 *********** 2025-06-02 00:55:49.800382 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.800390 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.800398 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:55:49.800406 | orchestrator | 2025-06-02 00:55:49.800414 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-02 00:55:49.800422 | orchestrator | Monday 02 June 2025 00:53:58 +0000 (0:00:00.314) 0:00:48.268 *********** 2025-06-02 00:55:49.800430 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:55:49.800438 | orchestrator | 2025-06-02 00:55:49.800446 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-02 00:55:49.800454 | orchestrator | Monday 02 June 2025 00:54:11 +0000 (0:00:13.032) 0:01:01.301 *********** 2025-06-02 00:55:49.800462 | 
orchestrator | changed: [testbed-node-0] 2025-06-02 00:55:49.800470 | orchestrator | 2025-06-02 00:55:49.800483 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 00:55:49.800491 | orchestrator | Monday 02 June 2025 00:54:20 +0000 (0:00:09.120) 0:01:10.421 *********** 2025-06-02 00:55:49.800499 | orchestrator | 2025-06-02 00:55:49.800507 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 00:55:49.800515 | orchestrator | Monday 02 June 2025 00:54:20 +0000 (0:00:00.245) 0:01:10.667 *********** 2025-06-02 00:55:49.800523 | orchestrator | 2025-06-02 00:55:49.800531 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 00:55:49.800539 | orchestrator | Monday 02 June 2025 00:54:20 +0000 (0:00:00.067) 0:01:10.735 *********** 2025-06-02 00:55:49.800547 | orchestrator | 2025-06-02 00:55:49.800555 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-02 00:55:49.800562 | orchestrator | Monday 02 June 2025 00:54:20 +0000 (0:00:00.059) 0:01:10.794 *********** 2025-06-02 00:55:49.800570 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:55:49.800578 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:55:49.800587 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:55:49.800595 | orchestrator | 2025-06-02 00:55:49.800603 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-02 00:55:49.800610 | orchestrator | Monday 02 June 2025 00:54:41 +0000 (0:00:21.037) 0:01:31.831 *********** 2025-06-02 00:55:49.800641 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:55:49.800649 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:55:49.800657 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:55:49.800665 | orchestrator | 2025-06-02 00:55:49.800674 | orchestrator | RUNNING HANDLER 
[keystone : Restart keystone container] ************************ 2025-06-02 00:55:49.800681 | orchestrator | Monday 02 June 2025 00:54:52 +0000 (0:00:10.479) 0:01:42.311 *********** 2025-06-02 00:55:49.800689 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:55:49.800697 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:55:49.800705 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:55:49.800713 | orchestrator | 2025-06-02 00:55:49.800721 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 00:55:49.800729 | orchestrator | Monday 02 June 2025 00:55:03 +0000 (0:00:11.671) 0:01:53.983 *********** 2025-06-02 00:55:49.800737 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:55:49.800745 | orchestrator | 2025-06-02 00:55:49.800753 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-02 00:55:49.800763 | orchestrator | Monday 02 June 2025 00:55:04 +0000 (0:00:00.677) 0:01:54.660 *********** 2025-06-02 00:55:49.800771 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:55:49.800782 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:55:49.800791 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:55:49.800800 | orchestrator | 2025-06-02 00:55:49.800809 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-02 00:55:49.800818 | orchestrator | Monday 02 June 2025 00:55:05 +0000 (0:00:00.664) 0:01:55.325 *********** 2025-06-02 00:55:49.800834 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:55:49.800843 | orchestrator | 2025-06-02 00:55:49.800853 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-02 00:55:49.800866 | orchestrator | Monday 02 June 2025 00:55:06 +0000 (0:00:01.667) 0:01:56.992 *********** 2025-06-02 00:55:49.800876 | orchestrator 
| changed: [testbed-node-0] => (item=RegionOne) 2025-06-02 00:55:49.800884 | orchestrator | 2025-06-02 00:55:49.800892 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-02 00:55:49.800900 | orchestrator | Monday 02 June 2025 00:55:16 +0000 (0:00:09.421) 0:02:06.414 *********** 2025-06-02 00:55:49.800908 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-02 00:55:49.800916 | orchestrator | 2025-06-02 00:55:49.800924 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-02 00:55:49.800932 | orchestrator | Monday 02 June 2025 00:55:36 +0000 (0:00:19.821) 0:02:26.235 *********** 2025-06-02 00:55:49.800939 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-02 00:55:49.800948 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-02 00:55:49.800955 | orchestrator | 2025-06-02 00:55:49.800963 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-02 00:55:49.800972 | orchestrator | Monday 02 June 2025 00:55:42 +0000 (0:00:06.539) 0:02:32.774 *********** 2025-06-02 00:55:49.800979 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.800987 | orchestrator | 2025-06-02 00:55:49.800995 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-02 00:55:49.801003 | orchestrator | Monday 02 June 2025 00:55:42 +0000 (0:00:00.287) 0:02:33.062 *********** 2025-06-02 00:55:49.801011 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.801019 | orchestrator | 2025-06-02 00:55:49.801026 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-02 00:55:49.801034 | orchestrator | Monday 02 June 2025 00:55:42 +0000 (0:00:00.102) 0:02:33.164 *********** 
2025-06-02 00:55:49.801042 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.801050 | orchestrator | 2025-06-02 00:55:49.801058 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-02 00:55:49.801066 | orchestrator | Monday 02 June 2025 00:55:43 +0000 (0:00:00.107) 0:02:33.271 *********** 2025-06-02 00:55:49.801074 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.801082 | orchestrator | 2025-06-02 00:55:49.801090 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-02 00:55:49.801098 | orchestrator | Monday 02 June 2025 00:55:43 +0000 (0:00:00.291) 0:02:33.563 *********** 2025-06-02 00:55:49.801106 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:55:49.801114 | orchestrator | 2025-06-02 00:55:49.801122 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 00:55:49.801130 | orchestrator | Monday 02 June 2025 00:55:46 +0000 (0:00:02.953) 0:02:36.517 *********** 2025-06-02 00:55:49.801137 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:55:49.801217 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:55:49.801227 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:55:49.801236 | orchestrator | 2025-06-02 00:55:49.801249 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:55:49.801258 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-02 00:55:49.801267 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 00:55:49.801275 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 00:55:49.801283 | orchestrator | 2025-06-02 00:55:49.801291 | orchestrator | 2025-06-02 00:55:49.801299 | orchestrator | TASKS RECAP 
********************************************************************
2025-06-02 00:55:49.801313 | orchestrator | Monday 02 June 2025 00:55:47 +0000 (0:00:00.678) 0:02:37.195 ***********
2025-06-02 00:55:49.801321 | orchestrator | ===============================================================================
2025-06-02 00:55:49.801329 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 21.04s
2025-06-02 00:55:49.801337 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.82s
2025-06-02 00:55:49.801345 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.03s
2025-06-02 00:55:49.801353 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.67s
2025-06-02 00:55:49.801361 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.48s
2025-06-02 00:55:49.801369 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.42s
2025-06-02 00:55:49.801377 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.12s
2025-06-02 00:55:49.801385 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.56s
2025-06-02 00:55:49.801393 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.54s
2025-06-02 00:55:49.801403 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.00s
2025-06-02 00:55:49.801416 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.53s
2025-06-02 00:55:49.801429 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.30s
2025-06-02 00:55:49.801443 | orchestrator | keystone : Creating default user role ----------------------------------- 2.95s
2025-06-02 00:55:49.801456 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.54s
2025-06-02 00:55:49.801465 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.47s
2025-06-02 00:55:49.801473 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.27s
2025-06-02 00:55:49.801486 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.18s
2025-06-02 00:55:49.801499 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.15s
2025-06-02 00:55:49.801514 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.84s
2025-06-02 00:55:49.801526 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.67s
2025-06-02 00:55:49.801535 | orchestrator | 2025-06-02 00:55:49 | INFO  | Task 3d2892ee-c419-4d78-99d3-36f7fa33a79a is in state STARTED
2025-06-02 00:55:49.801543 | orchestrator | 2025-06-02 00:55:49 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:55:49.801601 | orchestrator | 2025-06-02 00:55:49 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:55:49.801610 | orchestrator | 2025-06-02 00:55:49 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:55:49.801771 | orchestrator | 2025-06-02 00:55:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:55:52.853285 | orchestrator | 2025-06-02 00:55:52 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:55:52.854076 | orchestrator | 2025-06-02 00:55:52 | INFO  | Task 3d2892ee-c419-4d78-99d3-36f7fa33a79a is in state STARTED
2025-06-02 00:55:52.854131 | orchestrator | 2025-06-02 00:55:52 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:55:52.854155 | orchestrator | 2025-06-02 00:55:52 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:55:52.854175 | orchestrator | 2025-06-02 00:55:52 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:55:52.854195 | orchestrator | 2025-06-02 00:55:52 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:55:55.906785 | orchestrator | 2025-06-02 00:55:55 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:55:55.908926 | orchestrator | 2025-06-02 00:55:55 | INFO  | Task 3d2892ee-c419-4d78-99d3-36f7fa33a79a is in state STARTED
2025-06-02 00:55:55.910200 | orchestrator | 2025-06-02 00:55:55 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:55:55.911371 | orchestrator | 2025-06-02 00:55:55 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:55:55.912448 | orchestrator | 2025-06-02 00:55:55 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:55:55.912470 | orchestrator | 2025-06-02 00:55:55 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:55:58.950260 | orchestrator | 2025-06-02 00:55:58 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:55:58.950464 | orchestrator | 2025-06-02 00:55:58 | INFO  | Task 3d2892ee-c419-4d78-99d3-36f7fa33a79a is in state STARTED
2025-06-02 00:55:58.951146 | orchestrator | 2025-06-02 00:55:58 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:55:58.951585 | orchestrator | 2025-06-02 00:55:58 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:55:58.953000 | orchestrator | 2025-06-02 00:55:58 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:55:58.953124 | orchestrator | 2025-06-02 00:55:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:56:01.979874 | orchestrator | 2025-06-02 00:56:01 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:56:01.980782 | orchestrator | 2025-06-02 00:56:01 | INFO  | Task 3d2892ee-c419-4d78-99d3-36f7fa33a79a is in state STARTED
2025-06-02 00:56:01.981431 | orchestrator | 2025-06-02 00:56:01 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:56:01.983624 | orchestrator | 2025-06-02 00:56:01 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:56:01.984377 | orchestrator | 2025-06-02 00:56:01 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:56:01.984406 | orchestrator | 2025-06-02 00:56:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:56:05.016553 | orchestrator | 2025-06-02 00:56:05 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:56:05.016721 | orchestrator | 2025-06-02 00:56:05 | INFO  | Task 3d2892ee-c419-4d78-99d3-36f7fa33a79a is in state STARTED
2025-06-02 00:56:05.016738 | orchestrator | 2025-06-02 00:56:05 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:56:05.016765 | orchestrator | 2025-06-02 00:56:05 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:56:05.016777 | orchestrator | 2025-06-02 00:56:05 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:56:05.016789 | orchestrator | 2025-06-02 00:56:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:56:08.042231 | orchestrator | 2025-06-02 00:56:08 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:56:08.043558 | orchestrator | 2025-06-02 00:56:08 | INFO  | Task 3d2892ee-c419-4d78-99d3-36f7fa33a79a is in state STARTED
2025-06-02 00:56:08.044663 | orchestrator | 2025-06-02 00:56:08 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:56:08.045072 | orchestrator | 2025-06-02 00:56:08 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:56:08.047535 | orchestrator | 2025-06-02 00:56:08 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:56:08.047590 | orchestrator | 2025-06-02 00:56:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:56:11.079924 | orchestrator | 2025-06-02 00:56:11 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:56:11.080670 | orchestrator | 2025-06-02 00:56:11 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:56:11.081287 | orchestrator | 2025-06-02 00:56:11 | INFO  | Task 3d2892ee-c419-4d78-99d3-36f7fa33a79a is in state SUCCESS
2025-06-02 00:56:11.081725 | orchestrator | 2025-06-02 00:56:11 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:56:11.082435 | orchestrator | 2025-06-02 00:56:11 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:56:11.083213 | orchestrator | 2025-06-02 00:56:11 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:56:11.083245 | orchestrator | 2025-06-02 00:56:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:56:14.122176 | orchestrator | 2025-06-02 00:56:14 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED
2025-06-02 00:56:14.122387 | orchestrator | 2025-06-02 00:56:14 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:56:14.124755 | orchestrator | 2025-06-02 00:56:14 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED
2025-06-02 00:56:14.125772 | orchestrator | 2025-06-02 00:56:14 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:56:14.127160 | orchestrator | 2025-06-02 00:56:14 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:56:14.127189 | orchestrator | 2025-06-02 00:56:14 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:56:17.157933 | orchestrator | 2025-06-02
00:56:17 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:17.159970 | orchestrator | 2025-06-02 00:56:17 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:17.162145 | orchestrator | 2025-06-02 00:56:17 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:17.164512 | orchestrator | 2025-06-02 00:56:17 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:17.165720 | orchestrator | 2025-06-02 00:56:17 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:17.165954 | orchestrator | 2025-06-02 00:56:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:20.195743 | orchestrator | 2025-06-02 00:56:20 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:20.198478 | orchestrator | 2025-06-02 00:56:20 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:20.198511 | orchestrator | 2025-06-02 00:56:20 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:20.198524 | orchestrator | 2025-06-02 00:56:20 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:20.198535 | orchestrator | 2025-06-02 00:56:20 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:20.198547 | orchestrator | 2025-06-02 00:56:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:23.228049 | orchestrator | 2025-06-02 00:56:23 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:23.228166 | orchestrator | 2025-06-02 00:56:23 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:23.229153 | orchestrator | 2025-06-02 00:56:23 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:23.229194 | orchestrator | 2025-06-02 
00:56:23 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:23.229937 | orchestrator | 2025-06-02 00:56:23 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:23.229968 | orchestrator | 2025-06-02 00:56:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:26.254349 | orchestrator | 2025-06-02 00:56:26 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:26.254433 | orchestrator | 2025-06-02 00:56:26 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:26.255521 | orchestrator | 2025-06-02 00:56:26 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:26.255956 | orchestrator | 2025-06-02 00:56:26 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:26.256484 | orchestrator | 2025-06-02 00:56:26 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:26.256505 | orchestrator | 2025-06-02 00:56:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:29.277540 | orchestrator | 2025-06-02 00:56:29 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:29.277822 | orchestrator | 2025-06-02 00:56:29 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:29.278199 | orchestrator | 2025-06-02 00:56:29 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:29.278765 | orchestrator | 2025-06-02 00:56:29 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:29.279390 | orchestrator | 2025-06-02 00:56:29 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:29.279425 | orchestrator | 2025-06-02 00:56:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:32.305787 | orchestrator | 2025-06-02 00:56:32 | INFO  | Task 
ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:32.305922 | orchestrator | 2025-06-02 00:56:32 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:32.306004 | orchestrator | 2025-06-02 00:56:32 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:32.306571 | orchestrator | 2025-06-02 00:56:32 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:32.307266 | orchestrator | 2025-06-02 00:56:32 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:32.307287 | orchestrator | 2025-06-02 00:56:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:35.340862 | orchestrator | 2025-06-02 00:56:35 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:35.341010 | orchestrator | 2025-06-02 00:56:35 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:35.341135 | orchestrator | 2025-06-02 00:56:35 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:35.342437 | orchestrator | 2025-06-02 00:56:35 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:35.342986 | orchestrator | 2025-06-02 00:56:35 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:35.343009 | orchestrator | 2025-06-02 00:56:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:38.372655 | orchestrator | 2025-06-02 00:56:38 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:38.372741 | orchestrator | 2025-06-02 00:56:38 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:38.372926 | orchestrator | 2025-06-02 00:56:38 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:38.375255 | orchestrator | 2025-06-02 00:56:38 | INFO  | Task 
2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:38.375950 | orchestrator | 2025-06-02 00:56:38 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:38.375977 | orchestrator | 2025-06-02 00:56:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:41.419318 | orchestrator | 2025-06-02 00:56:41 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:41.419523 | orchestrator | 2025-06-02 00:56:41 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:41.420355 | orchestrator | 2025-06-02 00:56:41 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:41.420763 | orchestrator | 2025-06-02 00:56:41 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:41.421764 | orchestrator | 2025-06-02 00:56:41 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:41.421837 | orchestrator | 2025-06-02 00:56:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:44.445178 | orchestrator | 2025-06-02 00:56:44 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:44.445547 | orchestrator | 2025-06-02 00:56:44 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:44.446505 | orchestrator | 2025-06-02 00:56:44 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:44.448070 | orchestrator | 2025-06-02 00:56:44 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:44.448959 | orchestrator | 2025-06-02 00:56:44 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:44.448991 | orchestrator | 2025-06-02 00:56:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:47.477087 | orchestrator | 2025-06-02 00:56:47 | INFO  | Task 
ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:47.477457 | orchestrator | 2025-06-02 00:56:47 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:47.478162 | orchestrator | 2025-06-02 00:56:47 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state STARTED 2025-06-02 00:56:47.479997 | orchestrator | 2025-06-02 00:56:47 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:56:47.481394 | orchestrator | 2025-06-02 00:56:47 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:56:47.481423 | orchestrator | 2025-06-02 00:56:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:56:50.503941 | orchestrator | 2025-06-02 00:56:50 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:56:50.504842 | orchestrator | 2025-06-02 00:56:50 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:56:50.505287 | orchestrator | 2025-06-02 00:56:50 | INFO  | Task 3672ea5d-9715-48e2-8e48-565ce195c939 is in state SUCCESS 2025-06-02 00:56:50.505654 | orchestrator | 2025-06-02 00:56:50.505679 | orchestrator | 2025-06-02 00:56:50.505692 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:56:50.505726 | orchestrator | 2025-06-02 00:56:50.505738 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:56:50.505750 | orchestrator | Monday 02 June 2025 00:55:39 +0000 (0:00:00.232) 0:00:00.232 *********** 2025-06-02 00:56:50.505762 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:56:50.505775 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:56:50.505786 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:56:50.505798 | orchestrator | ok: [testbed-manager] 2025-06-02 00:56:50.505809 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:56:50.505820 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 00:56:50.505831 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:56:50.505842 | orchestrator | 2025-06-02 00:56:50.505854 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:56:50.505865 | orchestrator | Monday 02 June 2025 00:55:39 +0000 (0:00:00.715) 0:00:00.948 *********** 2025-06-02 00:56:50.505876 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-02 00:56:50.505887 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-02 00:56:50.505898 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-02 00:56:50.505909 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-02 00:56:50.505920 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-02 00:56:50.505931 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-02 00:56:50.505942 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-02 00:56:50.505953 | orchestrator | 2025-06-02 00:56:50.506123 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-02 00:56:50.506148 | orchestrator | 2025-06-02 00:56:50.506160 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-02 00:56:50.506171 | orchestrator | Monday 02 June 2025 00:55:40 +0000 (0:00:01.092) 0:00:02.040 *********** 2025-06-02 00:56:50.506182 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:56:50.506194 | orchestrator | 2025-06-02 00:56:50.506205 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-02 00:56:50.506216 | orchestrator | Monday 02 June 2025 00:55:42 +0000 (0:00:01.592) 0:00:03.633 *********** 
2025-06-02 00:56:50.506227 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-02 00:56:50.506237 | orchestrator | 2025-06-02 00:56:50.506253 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-02 00:56:50.506264 | orchestrator | Monday 02 June 2025 00:55:45 +0000 (0:00:03.373) 0:00:07.007 *********** 2025-06-02 00:56:50.506278 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-02 00:56:50.506293 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-02 00:56:50.506305 | orchestrator | 2025-06-02 00:56:50.506318 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-02 00:56:50.506331 | orchestrator | Monday 02 June 2025 00:55:51 +0000 (0:00:05.375) 0:00:12.382 *********** 2025-06-02 00:56:50.506344 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 00:56:50.506356 | orchestrator | 2025-06-02 00:56:50.506369 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-02 00:56:50.506382 | orchestrator | Monday 02 June 2025 00:55:53 +0000 (0:00:02.728) 0:00:15.111 *********** 2025-06-02 00:56:50.506394 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 00:56:50.506406 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-02 00:56:50.506419 | orchestrator | 2025-06-02 00:56:50.506432 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-02 00:56:50.506444 | orchestrator | Monday 02 June 2025 00:55:57 +0000 (0:00:03.500) 0:00:18.611 *********** 2025-06-02 00:56:50.506467 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 00:56:50.506480 | orchestrator | changed: 
[testbed-node-0] => (item=ResellerAdmin) 2025-06-02 00:56:50.506493 | orchestrator | 2025-06-02 00:56:50.506506 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-02 00:56:50.506560 | orchestrator | Monday 02 June 2025 00:56:03 +0000 (0:00:06.272) 0:00:24.884 *********** 2025-06-02 00:56:50.506605 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-02 00:56:50.506619 | orchestrator | 2025-06-02 00:56:50.506632 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:56:50.506643 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.506654 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.506666 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.506677 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.506688 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.506714 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.506726 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.506737 | orchestrator | 2025-06-02 00:56:50.506748 | orchestrator | 2025-06-02 00:56:50.506759 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:56:50.506770 | orchestrator | Monday 02 June 2025 00:56:09 +0000 (0:00:05.288) 0:00:30.172 *********** 2025-06-02 00:56:50.506782 | orchestrator | =============================================================================== 2025-06-02 00:56:50.506793 | 
orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.27s 2025-06-02 00:56:50.506804 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.38s 2025-06-02 00:56:50.506814 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.29s 2025-06-02 00:56:50.506825 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.50s 2025-06-02 00:56:50.506836 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.37s 2025-06-02 00:56:50.506847 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.73s 2025-06-02 00:56:50.506857 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.59s 2025-06-02 00:56:50.506868 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.09s 2025-06-02 00:56:50.506879 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s 2025-06-02 00:56:50.506890 | orchestrator | 2025-06-02 00:56:50.506901 | orchestrator | 2025-06-02 00:56:50.506911 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-06-02 00:56:50.506922 | orchestrator | 2025-06-02 00:56:50.506933 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-02 00:56:50.506944 | orchestrator | Monday 02 June 2025 00:55:32 +0000 (0:00:00.250) 0:00:00.250 *********** 2025-06-02 00:56:50.506955 | orchestrator | changed: [testbed-manager] 2025-06-02 00:56:50.506966 | orchestrator | 2025-06-02 00:56:50.506977 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-02 00:56:50.506988 | orchestrator | Monday 02 June 2025 00:55:33 +0000 (0:00:01.347) 0:00:01.598 *********** 2025-06-02 00:56:50.506999 | orchestrator | changed: 
[testbed-manager] 2025-06-02 00:56:50.507010 | orchestrator | 2025-06-02 00:56:50.507029 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-02 00:56:50.507040 | orchestrator | Monday 02 June 2025 00:55:34 +0000 (0:00:00.912) 0:00:02.511 *********** 2025-06-02 00:56:50.507056 | orchestrator | changed: [testbed-manager] 2025-06-02 00:56:50.507067 | orchestrator | 2025-06-02 00:56:50.507078 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-02 00:56:50.507089 | orchestrator | Monday 02 June 2025 00:55:35 +0000 (0:00:00.917) 0:00:03.428 *********** 2025-06-02 00:56:50.507100 | orchestrator | changed: [testbed-manager] 2025-06-02 00:56:50.507111 | orchestrator | 2025-06-02 00:56:50.507122 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-02 00:56:50.507133 | orchestrator | Monday 02 June 2025 00:55:36 +0000 (0:00:01.008) 0:00:04.436 *********** 2025-06-02 00:56:50.507144 | orchestrator | changed: [testbed-manager] 2025-06-02 00:56:50.507155 | orchestrator | 2025-06-02 00:56:50.507166 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-02 00:56:50.507177 | orchestrator | Monday 02 June 2025 00:55:37 +0000 (0:00:01.063) 0:00:05.500 *********** 2025-06-02 00:56:50.507188 | orchestrator | changed: [testbed-manager] 2025-06-02 00:56:50.507199 | orchestrator | 2025-06-02 00:56:50.507210 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-02 00:56:50.507221 | orchestrator | Monday 02 June 2025 00:55:38 +0000 (0:00:00.867) 0:00:06.368 *********** 2025-06-02 00:56:50.507231 | orchestrator | changed: [testbed-manager] 2025-06-02 00:56:50.507242 | orchestrator | 2025-06-02 00:56:50.507253 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-02 00:56:50.507292 
| orchestrator | Monday 02 June 2025 00:55:39 +0000 (0:00:01.046) 0:00:07.415 *********** 2025-06-02 00:56:50.507305 | orchestrator | changed: [testbed-manager] 2025-06-02 00:56:50.507339 | orchestrator | 2025-06-02 00:56:50.507352 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-02 00:56:50.507363 | orchestrator | Monday 02 June 2025 00:55:40 +0000 (0:00:00.872) 0:00:08.287 *********** 2025-06-02 00:56:50.507374 | orchestrator | changed: [testbed-manager] 2025-06-02 00:56:50.507385 | orchestrator | 2025-06-02 00:56:50.507396 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-02 00:56:50.507407 | orchestrator | Monday 02 June 2025 00:56:25 +0000 (0:00:44.633) 0:00:52.921 *********** 2025-06-02 00:56:50.507417 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:56:50.507428 | orchestrator | 2025-06-02 00:56:50.507439 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 00:56:50.507450 | orchestrator | 2025-06-02 00:56:50.507460 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 00:56:50.507471 | orchestrator | Monday 02 June 2025 00:56:25 +0000 (0:00:00.106) 0:00:53.027 *********** 2025-06-02 00:56:50.507482 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:56:50.507493 | orchestrator | 2025-06-02 00:56:50.507504 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 00:56:50.507515 | orchestrator | 2025-06-02 00:56:50.507526 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 00:56:50.507537 | orchestrator | Monday 02 June 2025 00:56:26 +0000 (0:00:01.380) 0:00:54.408 *********** 2025-06-02 00:56:50.507548 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:56:50.507559 | orchestrator | 2025-06-02 00:56:50.507587 | 
orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 00:56:50.507599 | orchestrator | 2025-06-02 00:56:50.507610 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 00:56:50.507621 | orchestrator | Monday 02 June 2025 00:56:37 +0000 (0:00:11.169) 0:01:05.578 *********** 2025-06-02 00:56:50.507632 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:56:50.507643 | orchestrator | 2025-06-02 00:56:50.507661 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:56:50.507673 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 00:56:50.507691 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.507702 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.507713 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 00:56:50.507724 | orchestrator | 2025-06-02 00:56:50.507735 | orchestrator | 2025-06-02 00:56:50.507746 | orchestrator | 2025-06-02 00:56:50.507757 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:56:50.507768 | orchestrator | Monday 02 June 2025 00:56:49 +0000 (0:00:11.061) 0:01:16.640 *********** 2025-06-02 00:56:50.507778 | orchestrator | =============================================================================== 2025-06-02 00:56:50.507789 | orchestrator | Create admin user ------------------------------------------------------ 44.63s 2025-06-02 00:56:50.507800 | orchestrator | Restart ceph manager service ------------------------------------------- 23.61s 2025-06-02 00:56:50.507811 | orchestrator | Disable the ceph dashboard 
---------------------------------------------- 1.35s 2025-06-02 00:56:50.507822 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s 2025-06-02 00:56:50.507833 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.05s 2025-06-02 00:56:50.507844 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.01s 2025-06-02 00:56:50.507854 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.92s 2025-06-02 00:56:50.507865 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.91s 2025-06-02 00:56:50.507876 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.87s 2025-06-02 00:56:50.507887 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.87s 2025-06-02 00:56:50.507903 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.11s 2025-06-02 00:56:50 - 00:57:20 | orchestrator | [repetitive polling output condensed] Tasks ed2f444e-cc0a-4396-aa72-3f454a230010, d0d011c1-fe8a-43e0-86f1-d1fd80735c30, 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 and 15ecbf31-d81c-4070-9525-6c877002593b remained in state STARTED, polled every ~3 seconds ("Wait 1 second(s) until the next check").
2025-06-02 00:57:23.909603 | orchestrator | 2025-06-02 00:57:23 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:23.910816 | orchestrator | 2025-06-02 00:57:23 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:23.912376 | orchestrator | 2025-06-02 00:57:23 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:23.914099 | orchestrator | 2025-06-02 00:57:23 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:23.914133 | orchestrator | 2025-06-02 00:57:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:26.949093 | orchestrator | 2025-06-02 00:57:26 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:26.949700 | orchestrator | 2025-06-02 00:57:26 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:26.950480 | orchestrator | 2025-06-02 00:57:26 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:26.951512 | orchestrator | 2025-06-02 00:57:26 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:26.953368 | orchestrator | 2025-06-02 00:57:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:29.998362 | orchestrator | 2025-06-02 00:57:29 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:30.000223 | orchestrator | 2025-06-02 00:57:29 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:30.003444 | orchestrator | 2025-06-02 00:57:30 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:30.005927 | orchestrator | 2025-06-02 00:57:30 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:30.005945 | orchestrator | 2025-06-02 00:57:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:33.055824 | 
orchestrator | 2025-06-02 00:57:33 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:33.055946 | orchestrator | 2025-06-02 00:57:33 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:33.057211 | orchestrator | 2025-06-02 00:57:33 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:33.058758 | orchestrator | 2025-06-02 00:57:33 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:33.058948 | orchestrator | 2025-06-02 00:57:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:36.111455 | orchestrator | 2025-06-02 00:57:36 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:36.112757 | orchestrator | 2025-06-02 00:57:36 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:36.115714 | orchestrator | 2025-06-02 00:57:36 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:36.119073 | orchestrator | 2025-06-02 00:57:36 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:36.119139 | orchestrator | 2025-06-02 00:57:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:39.172276 | orchestrator | 2025-06-02 00:57:39 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:39.172883 | orchestrator | 2025-06-02 00:57:39 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:39.174638 | orchestrator | 2025-06-02 00:57:39 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:39.176438 | orchestrator | 2025-06-02 00:57:39 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:39.176493 | orchestrator | 2025-06-02 00:57:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:42.235521 | orchestrator | 2025-06-02 
00:57:42 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:42.236586 | orchestrator | 2025-06-02 00:57:42 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:42.236726 | orchestrator | 2025-06-02 00:57:42 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:42.238340 | orchestrator | 2025-06-02 00:57:42 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:42.238370 | orchestrator | 2025-06-02 00:57:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:45.299860 | orchestrator | 2025-06-02 00:57:45 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:45.299966 | orchestrator | 2025-06-02 00:57:45 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:45.302069 | orchestrator | 2025-06-02 00:57:45 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:45.303056 | orchestrator | 2025-06-02 00:57:45 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:45.303084 | orchestrator | 2025-06-02 00:57:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:48.361789 | orchestrator | 2025-06-02 00:57:48 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:48.363492 | orchestrator | 2025-06-02 00:57:48 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:48.363622 | orchestrator | 2025-06-02 00:57:48 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:48.363638 | orchestrator | 2025-06-02 00:57:48 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:48.363724 | orchestrator | 2025-06-02 00:57:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:51.458491 | orchestrator | 2025-06-02 00:57:51 | INFO  | Task 
ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:51.461019 | orchestrator | 2025-06-02 00:57:51 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:51.464655 | orchestrator | 2025-06-02 00:57:51 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:51.465188 | orchestrator | 2025-06-02 00:57:51 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:51.465218 | orchestrator | 2025-06-02 00:57:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:54.514941 | orchestrator | 2025-06-02 00:57:54 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:54.515723 | orchestrator | 2025-06-02 00:57:54 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:54.517398 | orchestrator | 2025-06-02 00:57:54 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:54.518608 | orchestrator | 2025-06-02 00:57:54 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:54.518648 | orchestrator | 2025-06-02 00:57:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:57:57.558316 | orchestrator | 2025-06-02 00:57:57 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:57:57.558515 | orchestrator | 2025-06-02 00:57:57 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:57:57.558922 | orchestrator | 2025-06-02 00:57:57 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:57:57.559678 | orchestrator | 2025-06-02 00:57:57 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:57:57.559703 | orchestrator | 2025-06-02 00:57:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:00.604231 | orchestrator | 2025-06-02 00:58:00 | INFO  | Task 
ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:58:00.606770 | orchestrator | 2025-06-02 00:58:00 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:58:00.608027 | orchestrator | 2025-06-02 00:58:00 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:58:00.609256 | orchestrator | 2025-06-02 00:58:00 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:58:00.610747 | orchestrator | 2025-06-02 00:58:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:03.648861 | orchestrator | 2025-06-02 00:58:03 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:58:03.648952 | orchestrator | 2025-06-02 00:58:03 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:58:03.649158 | orchestrator | 2025-06-02 00:58:03 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:58:03.649790 | orchestrator | 2025-06-02 00:58:03 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:58:03.651125 | orchestrator | 2025-06-02 00:58:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:06.695486 | orchestrator | 2025-06-02 00:58:06 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:58:06.695632 | orchestrator | 2025-06-02 00:58:06 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:58:06.695650 | orchestrator | 2025-06-02 00:58:06 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:58:06.696313 | orchestrator | 2025-06-02 00:58:06 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:58:06.696364 | orchestrator | 2025-06-02 00:58:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:09.735422 | orchestrator | 2025-06-02 00:58:09 | INFO  | Task 
ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:58:09.736706 | orchestrator | 2025-06-02 00:58:09 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:58:09.738073 | orchestrator | 2025-06-02 00:58:09 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:58:09.740093 | orchestrator | 2025-06-02 00:58:09 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:58:09.740125 | orchestrator | 2025-06-02 00:58:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:12.809272 | orchestrator | 2025-06-02 00:58:12 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:58:12.812742 | orchestrator | 2025-06-02 00:58:12 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:58:12.814325 | orchestrator | 2025-06-02 00:58:12 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:58:12.816590 | orchestrator | 2025-06-02 00:58:12 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:58:12.816607 | orchestrator | 2025-06-02 00:58:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:15.853795 | orchestrator | 2025-06-02 00:58:15 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:58:15.855765 | orchestrator | 2025-06-02 00:58:15 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:58:15.858311 | orchestrator | 2025-06-02 00:58:15 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:58:15.859857 | orchestrator | 2025-06-02 00:58:15 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:58:15.860662 | orchestrator | 2025-06-02 00:58:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:18.906585 | orchestrator | 2025-06-02 00:58:18 | INFO  | Task 
ed2f444e-cc0a-4396-aa72-3f454a230010 is in state STARTED 2025-06-02 00:58:18.908399 | orchestrator | 2025-06-02 00:58:18 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:58:18.911398 | orchestrator | 2025-06-02 00:58:18 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED 2025-06-02 00:58:18.913058 | orchestrator | 2025-06-02 00:58:18 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:58:18.913094 | orchestrator | 2025-06-02 00:58:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:21.955644 | orchestrator | 2025-06-02 00:58:21 | INFO  | Task ed2f444e-cc0a-4396-aa72-3f454a230010 is in state SUCCESS 2025-06-02 00:58:21.957142 | orchestrator | 2025-06-02 00:58:21.957191 | orchestrator | 2025-06-02 00:58:21.957205 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:58:21.957217 | orchestrator | 2025-06-02 00:58:21.957229 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:58:21.957241 | orchestrator | Monday 02 June 2025 00:55:38 +0000 (0:00:00.274) 0:00:00.274 *********** 2025-06-02 00:58:21.957252 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:58:21.957266 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:58:21.957278 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:58:21.957289 | orchestrator | 2025-06-02 00:58:21.957301 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:58:21.957312 | orchestrator | Monday 02 June 2025 00:55:39 +0000 (0:00:00.305) 0:00:00.580 *********** 2025-06-02 00:58:21.957347 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-02 00:58:21.957359 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-02 00:58:21.957370 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-02 
00:58:21.957381 | orchestrator |
2025-06-02 00:58:21.957392 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-06-02 00:58:21.957403 | orchestrator |
2025-06-02 00:58:21.957414 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-06-02 00:58:21.957425 | orchestrator | Monday 02 June 2025 00:55:39 +0000 (0:00:00.379) 0:00:00.960 ***********
2025-06-02 00:58:21.957436 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:58:21.957447 | orchestrator |
2025-06-02 00:58:21.957458 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-06-02 00:58:21.957469 | orchestrator | Monday 02 June 2025 00:55:40 +0000 (0:00:00.483) 0:00:01.443 ***********
2025-06-02 00:58:21.957480 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-06-02 00:58:21.957491 | orchestrator |
2025-06-02 00:58:21.957535 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-06-02 00:58:21.957547 | orchestrator | Monday 02 June 2025 00:55:43 +0000 (0:00:03.845) 0:00:05.289 ***********
2025-06-02 00:58:21.957558 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-06-02 00:58:21.957570 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-06-02 00:58:21.957581 | orchestrator |
2025-06-02 00:58:21.957591 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-06-02 00:58:21.957602 | orchestrator | Monday 02 June 2025 00:55:49 +0000 (0:00:05.543) 0:00:10.833 ***********
2025-06-02 00:58:21.957613 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-06-02 00:58:21.957624 | orchestrator |
2025-06-02 00:58:21.957635 | orchestrator | TASK
[service-ks-register : glance | Creating users] *************************** 2025-06-02 00:58:21.957645 | orchestrator | Monday 02 June 2025 00:55:52 +0000 (0:00:02.787) 0:00:13.620 *********** 2025-06-02 00:58:21.957658 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 00:58:21.957669 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-02 00:58:21.957680 | orchestrator | 2025-06-02 00:58:21.957694 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-02 00:58:21.957706 | orchestrator | Monday 02 June 2025 00:55:55 +0000 (0:00:03.538) 0:00:17.158 *********** 2025-06-02 00:58:21.957719 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 00:58:21.957731 | orchestrator | 2025-06-02 00:58:21.957744 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-02 00:58:21.957756 | orchestrator | Monday 02 June 2025 00:55:58 +0000 (0:00:03.127) 0:00:20.286 *********** 2025-06-02 00:58:21.957769 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-02 00:58:21.957781 | orchestrator | 2025-06-02 00:58:21.957793 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-02 00:58:21.957805 | orchestrator | Monday 02 June 2025 00:56:03 +0000 (0:00:04.543) 0:00:24.829 *********** 2025-06-02 00:58:21.957855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.957882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.957901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.957956 | orchestrator | 2025-06-02 00:58:21.957969 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 00:58:21.957980 | orchestrator | Monday 02 June 2025 00:56:09 +0000 (0:00:06.157) 0:00:30.987 *********** 2025-06-02 00:58:21.957999 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:58:21.958011 | orchestrator | 2025-06-02 00:58:21.958078 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-02 00:58:21.958090 | orchestrator | Monday 02 June 2025 00:56:10 +0000 (0:00:00.608) 0:00:31.596 *********** 2025-06-02 00:58:21.958101 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:21.958113 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:58:21.958124 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:58:21.958135 | orchestrator | 2025-06-02 00:58:21.958146 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-02 00:58:21.958157 | orchestrator | Monday 02 June 2025 00:56:13 +0000 (0:00:03.327) 0:00:34.923 *********** 2025-06-02 00:58:21.958168 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:58:21.958179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:58:21.958190 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:58:21.958201 | orchestrator | 2025-06-02 00:58:21.958211 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-02 00:58:21.958223 | orchestrator | Monday 02 June 2025 00:56:14 +0000 (0:00:01.376) 0:00:36.299 *********** 2025-06-02 00:58:21.958234 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:58:21.958245 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:58:21.958256 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:58:21.958267 | orchestrator | 2025-06-02 00:58:21.958278 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-02 00:58:21.958289 | orchestrator | Monday 02 June 2025 00:56:15 +0000 (0:00:00.980) 0:00:37.280 *********** 2025-06-02 00:58:21.958300 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:58:21.958311 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:58:21.958323 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:58:21.958334 | orchestrator | 2025-06-02 00:58:21.958345 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-02 00:58:21.958356 | orchestrator | Monday 02 June 2025 00:56:16 +0000 (0:00:00.801) 0:00:38.081 *********** 2025-06-02 00:58:21.958367 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.958378 | 
orchestrator | 2025-06-02 00:58:21.958388 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-02 00:58:21.958399 | orchestrator | Monday 02 June 2025 00:56:16 +0000 (0:00:00.133) 0:00:38.215 *********** 2025-06-02 00:58:21.958410 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.958421 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.958432 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.958443 | orchestrator | 2025-06-02 00:58:21.958454 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 00:58:21.958465 | orchestrator | Monday 02 June 2025 00:56:17 +0000 (0:00:00.279) 0:00:38.495 *********** 2025-06-02 00:58:21.958484 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 00:58:21.958495 | orchestrator | 2025-06-02 00:58:21.958524 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-02 00:58:21.958535 | orchestrator | Monday 02 June 2025 00:56:17 +0000 (0:00:00.489) 0:00:38.984 *********** 2025-06-02 00:58:21.958562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.958577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.958595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.958615 | orchestrator | 2025-06-02 00:58:21.958626 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-02 00:58:21.958637 | orchestrator | Monday 02 June 2025 00:56:21 +0000 (0:00:03.977) 0:00:42.961 *********** 2025-06-02 00:58:21.958657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 00:58:21.958670 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.958682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 00:58:21.958701 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.958725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 00:58:21.958738 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.958749 | orchestrator | 2025-06-02 00:58:21.958760 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-02 00:58:21.958771 | orchestrator | Monday 02 June 2025 00:56:25 +0000 (0:00:04.104) 0:00:47.066 *********** 2025-06-02 00:58:21.958783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 00:58:21.958801 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.958824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 00:58:21.958836 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.958848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}})  2025-06-02 00:58:21.958869 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.958880 | orchestrator | 2025-06-02 00:58:21.958891 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-02 00:58:21.958902 | orchestrator | Monday 02 June 2025 00:56:29 +0000 (0:00:03.951) 0:00:51.017 *********** 2025-06-02 00:58:21.958913 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.958924 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.958935 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.958946 | orchestrator | 2025-06-02 00:58:21.958957 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-02 00:58:21.958968 | orchestrator | Monday 02 June 2025 00:56:33 +0000 (0:00:03.583) 0:00:54.601 *********** 2025-06-02 00:58:21.958996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.959010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.959037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.959059 | orchestrator | 2025-06-02 00:58:21.959078 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-02 00:58:21.959096 | orchestrator | Monday 02 June 2025 00:56:37 +0000 (0:00:03.955) 0:00:58.556 *********** 2025-06-02 00:58:21.959114 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:21.959137 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:58:21.959163 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:58:21.959181 | orchestrator | 2025-06-02 00:58:21.959201 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-02 00:58:21.959231 | orchestrator | Monday 02 June 2025 00:56:44 +0000 (0:00:07.561) 0:01:06.119 *********** 2025-06-02 00:58:21.959251 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.959269 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.959288 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.959306 | orchestrator | 2025-06-02 00:58:21.959325 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-02 00:58:21.959344 | orchestrator | Monday 02 June 2025 00:56:49 +0000 (0:00:04.801) 0:01:10.920 *********** 2025-06-02 00:58:21.959362 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.959381 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.959399 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.959417 | orchestrator | 2025-06-02 00:58:21.959436 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-02 00:58:21.959455 | orchestrator | Monday 02 June 2025 00:56:55 +0000 (0:00:05.873) 0:01:16.794 *********** 2025-06-02 00:58:21.959486 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 00:58:21.959533 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.959554 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.959573 | orchestrator | 2025-06-02 00:58:21.959593 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-02 00:58:21.959612 | orchestrator | Monday 02 June 2025 00:57:01 +0000 (0:00:06.377) 0:01:23.171 *********** 2025-06-02 00:58:21.959631 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.959650 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.959668 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.959688 | orchestrator | 2025-06-02 00:58:21.959706 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-02 00:58:21.959725 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:04.026) 0:01:27.198 *********** 2025-06-02 00:58:21.959786 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.959806 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.959824 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.959844 | orchestrator | 2025-06-02 00:58:21.959863 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-02 00:58:21.959882 | orchestrator | Monday 02 June 2025 00:57:06 +0000 (0:00:00.289) 0:01:27.488 *********** 2025-06-02 00:58:21.959902 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 00:58:21.959921 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.959940 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 00:58:21.959959 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:21.959978 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  
2025-06-02 00:58:21.959998 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.960017 | orchestrator | 2025-06-02 00:58:21.960035 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-02 00:58:21.960055 | orchestrator | Monday 02 June 2025 00:57:09 +0000 (0:00:03.513) 0:01:31.001 *********** 2025-06-02 00:58:21.960083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.960122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.960158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:58:21.960178 | orchestrator | 2025-06-02 00:58:21.960198 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 00:58:21.960218 | orchestrator | Monday 02 June 2025 00:57:12 +0000 (0:00:03.012) 0:01:34.014 *********** 2025-06-02 00:58:21.960237 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 00:58:21.960262 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:21.960281 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:21.960299 | orchestrator | 2025-06-02 00:58:21.960318 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-02 00:58:21.960336 | orchestrator | Monday 02 June 2025 00:57:12 +0000 (0:00:00.229) 0:01:34.243 *********** 2025-06-02 00:58:21.960369 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:21.960389 | orchestrator | 2025-06-02 00:58:21.960408 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-02 00:58:21.960427 | orchestrator | Monday 02 June 2025 00:57:14 +0000 (0:00:01.897) 0:01:36.141 *********** 2025-06-02 00:58:21.960446 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:21.960466 | orchestrator | 2025-06-02 00:58:21.960481 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-02 00:58:21.960492 | orchestrator | Monday 02 June 2025 00:57:16 +0000 (0:00:02.069) 0:01:38.210 *********** 2025-06-02 00:58:21.960575 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:21.960589 | orchestrator | 2025-06-02 00:58:21.960601 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-02 00:58:21.960621 | orchestrator | Monday 02 June 2025 00:57:18 +0000 (0:00:01.989) 0:01:40.200 *********** 2025-06-02 00:58:21.960633 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:21.960643 | orchestrator | 2025-06-02 00:58:21.960654 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-02 00:58:21.960665 | orchestrator | Monday 02 June 2025 00:57:44 +0000 (0:00:25.608) 0:02:05.809 *********** 2025-06-02 00:58:21.960677 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:21.960696 | orchestrator | 2025-06-02 
00:58:21.960714 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-02 00:58:21.960732 | orchestrator | Monday 02 June 2025 00:57:46 +0000 (0:00:02.366) 0:02:08.175 ***********
2025-06-02 00:58:21.960750 | orchestrator |
2025-06-02 00:58:21.960769 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-02 00:58:21.960787 | orchestrator | Monday 02 June 2025 00:57:46 +0000 (0:00:00.063) 0:02:08.238 ***********
2025-06-02 00:58:21.960806 | orchestrator |
2025-06-02 00:58:21.960835 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-02 00:58:21.960856 | orchestrator | Monday 02 June 2025 00:57:46 +0000 (0:00:00.059) 0:02:08.298 ***********
2025-06-02 00:58:21.960877 | orchestrator |
2025-06-02 00:58:21.960896 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-06-02 00:58:21.960914 | orchestrator | Monday 02 June 2025 00:57:47 +0000 (0:00:00.078) 0:02:08.376 ***********
2025-06-02 00:58:21.960925 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:58:21.960936 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:58:21.960947 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:58:21.960957 | orchestrator |
2025-06-02 00:58:21.960969 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:58:21.960981 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 00:58:21.960994 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 00:58:21.961005 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 00:58:21.961016 | orchestrator |
2025-06-02 00:58:21.961027 | orchestrator |
2025-06-02 00:58:21.961037 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:58:21.961048 | orchestrator | Monday 02 June 2025 00:58:21 +0000 (0:00:34.343) 0:02:42.720 ***********
2025-06-02 00:58:21.961059 | orchestrator | ===============================================================================
2025-06-02 00:58:21.961077 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.34s
2025-06-02 00:58:21.961096 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.61s
2025-06-02 00:58:21.961112 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.56s
2025-06-02 00:58:21.961128 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.38s
2025-06-02 00:58:21.961159 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.16s
2025-06-02 00:58:21.961176 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.87s
2025-06-02 00:58:21.961200 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.54s
2025-06-02 00:58:21.961219 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.80s
2025-06-02 00:58:21.961233 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.54s
2025-06-02 00:58:21.961249 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.10s
2025-06-02 00:58:21.961265 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.03s
2025-06-02 00:58:21.961279 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.98s
2025-06-02 00:58:21.961300 | orchestrator | glance : Copying over config.json files for services -------------------- 3.96s
2025-06-02 00:58:21.961323 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.95s
2025-06-02 00:58:21.961338 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.85s
2025-06-02 00:58:21.961355 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.58s
2025-06-02 00:58:21.961370 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.54s
2025-06-02 00:58:21.961385 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.51s
2025-06-02 00:58:21.961408 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.33s
2025-06-02 00:58:21.961423 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.13s
2025-06-02 00:58:21.961439 | orchestrator | 2025-06-02 00:58:21 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:58:21.961673 | orchestrator | 2025-06-02 00:58:21 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:58:21.963013 | orchestrator | 2025-06-02 00:58:21 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:58:21.963452 | orchestrator | 2025-06-02 00:58:21 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:58:25.017880 | orchestrator | 2025-06-02 00:58:25 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:58:25.019205 | orchestrator | 2025-06-02 00:58:25 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED
2025-06-02 00:58:25.021631 | orchestrator | 2025-06-02 00:58:25 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:58:25.024628 | orchestrator | 2025-06-02 00:58:25 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:58:25.024652 | orchestrator | 2025-06-02 00:58:25 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:58:28.077098 | orchestrator | 2025-06-02 00:58:28 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:58:28.077317 | orchestrator | 2025-06-02 00:58:28 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED
2025-06-02 00:58:28.078238 | orchestrator | 2025-06-02 00:58:28 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:58:28.079780 | orchestrator | 2025-06-02 00:58:28 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:58:28.079804 | orchestrator | 2025-06-02 00:58:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:58:31.121843 | orchestrator | 2025-06-02 00:58:31 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:58:31.123423 | orchestrator | 2025-06-02 00:58:31 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED
2025-06-02 00:58:31.124960 | orchestrator | 2025-06-02 00:58:31 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:58:31.126631 | orchestrator | 2025-06-02 00:58:31 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:58:31.126662 | orchestrator | 2025-06-02 00:58:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:58:34.162705 | orchestrator | 2025-06-02 00:58:34 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:58:34.164569 | orchestrator | 2025-06-02 00:58:34 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED
2025-06-02 00:58:34.167048 | orchestrator | 2025-06-02 00:58:34 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:58:34.169337 | orchestrator | 2025-06-02 00:58:34 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:58:34.169831 | orchestrator | 2025-06-02 00:58:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:58:37.233921 | orchestrator | 2025-06-02 00:58:37 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:58:37.238599 | orchestrator | 2025-06-02 00:58:37 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED
2025-06-02 00:58:37.243452 | orchestrator | 2025-06-02 00:58:37 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:58:37.246371 | orchestrator | 2025-06-02 00:58:37 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:58:37.246640 | orchestrator | 2025-06-02 00:58:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:58:40.303892 | orchestrator | 2025-06-02 00:58:40 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:58:40.307006 | orchestrator | 2025-06-02 00:58:40 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED
2025-06-02 00:58:40.310812 | orchestrator | 2025-06-02 00:58:40 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state STARTED
2025-06-02 00:58:40.313955 | orchestrator | 2025-06-02 00:58:40 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED
2025-06-02 00:58:40.314096 | orchestrator | 2025-06-02 00:58:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:58:43.370609 | orchestrator | 2025-06-02 00:58:43 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 00:58:43.371810 | orchestrator | 2025-06-02 00:58:43 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED
2025-06-02 00:58:43.374808 | orchestrator | 2025-06-02 00:58:43 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED
2025-06-02 00:58:43.381043 | orchestrator | 2025-06-02 00:58:43 | INFO  | Task 2b1793f2-fd61-4291-9c4c-e8bfd3160cf8 is in state SUCCESS
2025-06-02 00:58:43.383716 | orchestrator |
2025-06-02 00:58:43.383761 | orchestrator |
2025-06-02 00:58:43.383774 | orchestrator | PLAY [Group hosts based on
configuration] ************************************** 2025-06-02 00:58:43.383786 | orchestrator | 2025-06-02 00:58:43.383797 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:58:43.383809 | orchestrator | Monday 02 June 2025 00:55:32 +0000 (0:00:00.276) 0:00:00.276 *********** 2025-06-02 00:58:43.383820 | orchestrator | ok: [testbed-manager] 2025-06-02 00:58:43.383835 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:58:43.383846 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:58:43.383858 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:58:43.383869 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:58:43.383880 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:58:43.383891 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:58:43.383902 | orchestrator | 2025-06-02 00:58:43.383914 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:58:43.383948 | orchestrator | Monday 02 June 2025 00:55:33 +0000 (0:00:00.732) 0:00:01.008 *********** 2025-06-02 00:58:43.383960 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-02 00:58:43.383971 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-02 00:58:43.383982 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-02 00:58:43.383993 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-02 00:58:43.384004 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-02 00:58:43.384014 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-02 00:58:43.384025 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-02 00:58:43.384036 | orchestrator | 2025-06-02 00:58:43.384047 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-02 00:58:43.384058 | orchestrator | 2025-06-02 
00:58:43.384069 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 00:58:43.384080 | orchestrator | Monday 02 June 2025 00:55:34 +0000 (0:00:00.587) 0:00:01.595 *********** 2025-06-02 00:58:43.384139 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:58:43.384155 | orchestrator | 2025-06-02 00:58:43.384166 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-02 00:58:43.384177 | orchestrator | Monday 02 June 2025 00:55:35 +0000 (0:00:01.391) 0:00:02.986 *********** 2025-06-02 00:58:43.384193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.384210 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 00:58:43.384222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.384246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.384281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.384294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.384305 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.384317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.384329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-02 00:58:43.384341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.384353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.384375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.384735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.384764 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.384785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.384807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.384827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.384858 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 00:58:43.384906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.384924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.384935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.384946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.384958 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.384969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.384981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.385036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.385047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.385058 | orchestrator | 2025-06-02 00:58:43.385070 | 
orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 00:58:43.385081 | orchestrator | Monday 02 June 2025 00:55:38 +0000 (0:00:03.298) 0:00:06.285 *********** 2025-06-02 00:58:43.385093 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:58:43.385104 | orchestrator | 2025-06-02 00:58:43.385115 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-02 00:58:43.385126 | orchestrator | Monday 02 June 2025 00:55:40 +0000 (0:00:01.260) 0:00:07.545 *********** 2025-06-02 00:58:43.385137 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 00:58:43.385150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.385172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.385191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.385204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.385215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.385263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.385275 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.385286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 
00:58:43.385305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.385323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.385427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385469 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385515 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.385558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.385587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.385626 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 00:58:43.385639 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.385696 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.387116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.387154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.387166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.387178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-02 00:58:43.387189 | orchestrator | 2025-06-02 00:58:43.387201 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-02 00:58:43.387213 | orchestrator | Monday 02 June 2025 00:55:45 +0000 (0:00:05.265) 0:00:12.811 *********** 2025-06-02 00:58:43.387224 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 00:58:43.387250 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.387270 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387293 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 00:58:43.387306 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.387329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387388 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:58:43.387401 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:43.387420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.387432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387443 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387521 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:43.387533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.387545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2025-06-02 00:58:43.387592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387603 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:43.387614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.387633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387656 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:58:43.387668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.387684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387715 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:58:43.387727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.387741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 
00:58:43.387774 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:58:43.387788 | orchestrator | 2025-06-02 00:58:43.387800 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-02 00:58:43.387813 | orchestrator | Monday 02 June 2025 00:55:47 +0000 (0:00:01.968) 0:00:14.780 *********** 2025-06-02 00:58:43.387827 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 00:58:43.387840 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.387859 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387879 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 00:58:43.387901 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387913 | 
orchestrator | skipping: [testbed-manager] 2025-06-02 00:58:43.387927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.387940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.387966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.387996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.388010 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:43.388024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.388044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.388057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.388070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.388082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.388093 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:43.388104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.388120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.388138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.388150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.388167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:58:43.388179 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:43.388190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.388202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.388213 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.388224 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:58:43.388235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.388257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.388269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.388287 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:58:43.388298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:58:43.388310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.388321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 00:58:43.388332 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:58:43.388343 | orchestrator | 2025-06-02 00:58:43.388354 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-02 00:58:43.388365 | orchestrator | Monday 02 June 2025 00:55:49 +0000 (0:00:02.073) 0:00:16.853 *********** 2025-06-02 00:58:43.388377 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 00:58:43.388393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.388411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.388429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.388440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.388452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.388463 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.388474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.388522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388570 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388658 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 00:58:43.388671 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388745 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.388769 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.388803 | orchestrator | 2025-06-02 00:58:43.388814 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-02 00:58:43.388825 | orchestrator | Monday 02 June 2025 00:55:55 +0000 (0:00:05.720) 0:00:22.574 *********** 2025-06-02 00:58:43.388836 | orchestrator | ok: 
[testbed-manager -> localhost]
2025-06-02 00:58:43.388847 | orchestrator |
2025-06-02 00:58:43.388858 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-06-02 00:58:43.388869 | orchestrator | Monday 02 June 2025 00:55:55 +0000 (0:00:00.859) 0:00:23.433 ***********
2025-06-02 00:58:43.388881 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084564, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.388912 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084564, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.388931 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084553, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.388943 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084564, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.388954 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084564, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.388965 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084564, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.388977 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084553, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.388988 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084564, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389011 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084564, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389028 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084553, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389040 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084535, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389051 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084553, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389063 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084553, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389074 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084535, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389092 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084535, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389108 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084536, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389128 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084553, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389140 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084551, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389151 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084536, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389162 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084536, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389174 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084535, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389192 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084535, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389208 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084535, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389285 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084551, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389299 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084540, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389310 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084553, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389321 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084551, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389333 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084536, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389351 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084536, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389362 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084540, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389384 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084536, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389396 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084549, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389407 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084540, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389419 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084551, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389430 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084551, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389448 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084549, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389459 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084555, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389480 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084540, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389513 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084551, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389525 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084549, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389537 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084535, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389555 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084540, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389567 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084555, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389578 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084563, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389600 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084549, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389612 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084540, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389624 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084555, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389635 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084555, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389653 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084549, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389664 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084563, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389676 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084563, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389692 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084575, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389710 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084549, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389722 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084563, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389734 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084555, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389755 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084575, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389767 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084555, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389778 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084575, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389795 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084536, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1136541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389812 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084558, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.120654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389824 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084558, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.120654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389842 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084563, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389854 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084575, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389866 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084538, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.114654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389877 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084563, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389893 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084538, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.114654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389910 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084575, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.389922 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084558, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.120654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False,
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.389940 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084575, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.389952 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084547, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.117654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.389963 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084558, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.120654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.389975 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084547, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.117654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.389991 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084551, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:58:43.390009 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084558, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.120654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390073 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084538, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.114654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390103 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084528, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.111654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390123 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084558, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.120654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390141 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084528, 'dev': 109, 'nlink': 1, 'atime': 
1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.111654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390160 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084538, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.114654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390188 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084552, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390218 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084538, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.114654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390237 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084552, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390256 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084547, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.117654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390268 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084538, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.114654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390279 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084574, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390291 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084547, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.117654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390307 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084547, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.117654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390323 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084528, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.111654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390341 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084540, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:58:43.390353 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084547, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.117654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390364 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084544, 'dev': 109, 'nlink': 1, 
'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390375 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084574, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390387 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084528, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.111654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390403 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084565, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1226542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390420 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084528, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.111654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390438 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:58:43.390450 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084528, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.111654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390462 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084552, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390473 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084574, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390537 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084544, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390551 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084552, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390563 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084552, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390580 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084552, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390601 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084549, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:58:43.390612 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084544, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390657 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084574, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390670 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084574, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390681 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084565, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1226542, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390692 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:58:43.390712 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084565, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1226542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390731 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:58:43.390749 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084574, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390761 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084544, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390773 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084544, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390784 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084565, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1226542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:58:43.390795 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:58:43.390807 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084544, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390818 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084565, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1226542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390829 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.390845 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084555, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1196542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390866 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084565, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1226542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390877 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.390887 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084563, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1216543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390897 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084575, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390908 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084558, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.120654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390918 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084538, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.114654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390928 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084547, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.117654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390947 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084528, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.111654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390963 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084552, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1186543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390973 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084574, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1246543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390983 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084544, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1166542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.390993 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084565, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 
'mtime': 1748724126.0, 'ctime': 1748823528.1226542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:58:43.391013 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-02 00:58:43.391023 | orchestrator | Monday 02 June 2025 00:56:19 +0000 (0:00:23.151) 0:00:46.585 ***********
2025-06-02 00:58:43.391033 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:58:43.391053 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-02 00:58:43.391062 | orchestrator | Monday 02 June 2025 00:56:19 +0000 (0:00:00.616) 0:00:47.202 ***********
2025-06-02 00:58:43.391072 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-06-02 00:58:43.391128 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 00:58:43.391138 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-06-02 00:58:43.391186 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-06-02 00:58:43.391239 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-06-02 00:58:43.391287 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-06-02 00:58:43.391341 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-06-02 00:58:43.391389 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-06-02 00:58:43.391437 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:58:43.391446 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 00:58:43.391456 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 00:58:43.391466 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 00:58:43.391476 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 00:58:43.391525 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 00:58:43.391547 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-06-02 00:58:43.391557 | orchestrator | Monday 02 June 2025 00:56:21 +0000 (0:00:02.220) 0:00:49.423 ***********
2025-06-02 00:58:43.391567 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 00:58:43.391577 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:58:43.391586 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 00:58:43.391596 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:58:43.391613 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 00:58:43.391623 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:58:43.391632 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 00:58:43.391640 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.391648 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 00:58:43.391656 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.391664 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 00:58:43.391672 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:58:43.391680 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 00:58:43.391696 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-06-02 00:58:43.391704 | orchestrator | Monday 02 June 2025 00:56:37 +0000 (0:00:15.879) 0:01:05.302 ***********
2025-06-02 00:58:43.391712 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 00:58:43.391720 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:58:43.391728 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 00:58:43.391736 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:58:43.391744 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 00:58:43.391752 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:58:43.391760 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 00:58:43.391768 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.391776 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 00:58:43.391784 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.391792 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 00:58:43.391800 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:58:43.391808 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 00:58:43.391823 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-06-02 00:58:43.391831 | orchestrator | Monday 02 June 2025 00:56:42 +0000 (0:00:04.380) 0:01:09.683 ***********
2025-06-02 00:58:43.391843 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 00:58:43.391852 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 00:58:43.391860 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 00:58:43.391868 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 00:58:43.391881 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:58:43.391889 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:58:43.391897 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:58:43.391905 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 00:58:43.391914 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.391922 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 00:58:43.391935 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:58:43.391944 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 00:58:43.391952 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.391968 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-06-02 00:58:43.391976 | orchestrator | Monday 02 June 2025 00:56:44 +0000 (0:00:01.774) 0:01:11.458 ***********
2025-06-02 00:58:43.391984 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:58:43.392000 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-06-02 00:58:43.392008 | orchestrator | Monday 02 June 2025 00:56:44 +0000 (0:00:00.640) 0:01:12.099 ***********
2025-06-02 00:58:43.392016 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:58:43.392024 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:58:43.392032 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:58:43.392040 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:58:43.392048 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.392056 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.392064 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:58:43.392081 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-06-02 00:58:43.392088 | orchestrator | Monday 02 June 2025 00:56:45 +0000 (0:00:01.053) 0:01:13.152 ***********
2025-06-02 00:58:43.392096 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:58:43.392104 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.392113 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:58:43.392120 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:58:43.392128 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:58:43.392136 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.392144 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:58:43.392160 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-06-02 00:58:43.392168 | orchestrator | Monday 02 June 2025 00:56:49 +0000 (0:00:03.349) 0:01:16.502 ***********
2025-06-02 00:58:43.392176 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 00:58:43.392184 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:58:43.392192 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 00:58:43.392200 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:58:43.392208 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 00:58:43.392216 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:58:43.392224 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 00:58:43.392232 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:58:43.392240 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 00:58:43.392248 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.392256 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 00:58:43.392264 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.392271 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 00:58:43.392279 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:58:43.392295 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-06-02 00:58:43.392303 | orchestrator | Monday 02 June 2025 00:56:50 +0000 (0:00:01.645) 0:01:18.148 ***********
2025-06-02 00:58:43.392311 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 00:58:43.392319 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:58:43.392332 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 00:58:43.392340 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 00:58:43.392348 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:58:43.392356 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 00:58:43.392364 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:58:43.392375 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 00:58:43.392384 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.392391 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 00:58:43.392399 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.392407 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 00:58:43.392416 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:58:43.392436 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-06-02 00:58:43.392445 | orchestrator | Monday 02 June 2025 00:56:53 +0000 (0:00:02.520) 0:01:20.668 ***********
2025-06-02 00:58:43.392452 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-06-02 00:58:43.392507 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:58:43.392523 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-06-02 00:58:43.392531 | orchestrator | Monday 02 June 2025 00:56:55 +0000 (0:00:02.221) 0:01:22.890 ***********
2025-06-02 00:58:43.392539 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:58:43.392547 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:58:43.392555 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:58:43.392563 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:58:43.392571 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.392579 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.392587 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:58:43.392603 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-06-02 00:58:43.392611 | orchestrator | Monday 02 June 2025 00:56:57 +0000 (0:00:02.050) 0:01:24.940 ***********
2025-06-02 00:58:43.392618 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:58:43.392626 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:58:43.392634 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:58:43.392642 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:58:43.392650 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:58:43.392658 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:58:43.392666 | orchestrator 
| skipping: [testbed-node-5] 2025-06-02 00:58:43.392674 | orchestrator | 2025-06-02 00:58:43.392682 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-02 00:58:43.392690 | orchestrator | Monday 02 June 2025 00:56:58 +0000 (0:00:01.427) 0:01:26.367 *********** 2025-06-02 00:58:43.392698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 00:58:43.392712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.392721 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.392734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.392749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.392758 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.392774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.392791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.392800 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 00:58:43.392813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.392828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392836 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:58:43.392845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.392866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.392874 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.392883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.392895 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.392947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:58:43.392981 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.392990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.392998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:58:43.393011 | orchestrator | 2025-06-02 00:58:43.393019 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-02 00:58:43.393027 | orchestrator | Monday 02 June 2025 00:57:04 +0000 (0:00:05.208) 0:01:31.575 *********** 2025-06-02 00:58:43.393035 | orchestrator | skipping: 
[testbed-manager] => (item=testbed-node-0)  2025-06-02 00:58:43.393043 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:58:43.393051 | orchestrator | 2025-06-02 00:58:43.393059 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:58:43.393067 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:00.917) 0:01:32.493 *********** 2025-06-02 00:58:43.393075 | orchestrator | 2025-06-02 00:58:43.393083 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:58:43.393090 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:00.049) 0:01:32.542 *********** 2025-06-02 00:58:43.393098 | orchestrator | 2025-06-02 00:58:43.393106 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:58:43.393115 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:00.047) 0:01:32.590 *********** 2025-06-02 00:58:43.393122 | orchestrator | 2025-06-02 00:58:43.393130 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:58:43.393138 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:00.050) 0:01:32.640 *********** 2025-06-02 00:58:43.393146 | orchestrator | 2025-06-02 00:58:43.393154 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:58:43.393162 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:00.051) 0:01:32.692 *********** 2025-06-02 00:58:43.393169 | orchestrator | 2025-06-02 00:58:43.393177 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:58:43.393185 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:00.245) 0:01:32.938 *********** 2025-06-02 00:58:43.393193 | orchestrator | 2025-06-02 00:58:43.393201 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2025-06-02 00:58:43.393209 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:00.116) 0:01:33.054 *********** 2025-06-02 00:58:43.393216 | orchestrator | 2025-06-02 00:58:43.393224 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-02 00:58:43.393232 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:00.142) 0:01:33.196 *********** 2025-06-02 00:58:43.393240 | orchestrator | changed: [testbed-manager] 2025-06-02 00:58:43.393248 | orchestrator | 2025-06-02 00:58:43.393256 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-02 00:58:43.393264 | orchestrator | Monday 02 June 2025 00:57:24 +0000 (0:00:18.588) 0:01:51.785 *********** 2025-06-02 00:58:43.393272 | orchestrator | changed: [testbed-manager] 2025-06-02 00:58:43.393280 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:43.393288 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:58:43.393295 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:58:43.393303 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:58:43.393311 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:58:43.393319 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:58:43.393327 | orchestrator | 2025-06-02 00:58:43.393335 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-02 00:58:43.393343 | orchestrator | Monday 02 June 2025 00:57:37 +0000 (0:00:13.332) 0:02:05.118 *********** 2025-06-02 00:58:43.393351 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:43.393359 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:58:43.393367 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:58:43.393375 | orchestrator | 2025-06-02 00:58:43.393383 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-02 00:58:43.393395 | 
orchestrator | Monday 02 June 2025 00:57:48 +0000 (0:00:10.339) 0:02:15.457 *********** 2025-06-02 00:58:43.393408 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:58:43.393416 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:43.393424 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:58:43.393431 | orchestrator | 2025-06-02 00:58:43.393439 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-02 00:58:43.393447 | orchestrator | Monday 02 June 2025 00:57:54 +0000 (0:00:06.800) 0:02:22.257 *********** 2025-06-02 00:58:43.393455 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:58:43.393464 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:43.393476 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:58:43.393498 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:58:43.393507 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:58:43.393516 | orchestrator | changed: [testbed-manager] 2025-06-02 00:58:43.393523 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:58:43.393532 | orchestrator | 2025-06-02 00:58:43.393540 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-02 00:58:43.393548 | orchestrator | Monday 02 June 2025 00:58:10 +0000 (0:00:15.630) 0:02:37.888 *********** 2025-06-02 00:58:43.393556 | orchestrator | changed: [testbed-manager] 2025-06-02 00:58:43.393563 | orchestrator | 2025-06-02 00:58:43.393571 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-02 00:58:43.393579 | orchestrator | Monday 02 June 2025 00:58:19 +0000 (0:00:09.537) 0:02:47.425 *********** 2025-06-02 00:58:43.393587 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:58:43.393595 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:58:43.393603 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:58:43.393611 | orchestrator | 2025-06-02 
00:58:43.393619 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-02 00:58:43.393627 | orchestrator | Monday 02 June 2025 00:58:29 +0000 (0:00:09.615) 0:02:57.041 *********** 2025-06-02 00:58:43.393635 | orchestrator | changed: [testbed-manager] 2025-06-02 00:58:43.393642 | orchestrator | 2025-06-02 00:58:43.393650 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-02 00:58:43.393658 | orchestrator | Monday 02 June 2025 00:58:34 +0000 (0:00:04.767) 0:03:01.809 *********** 2025-06-02 00:58:43.393666 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:58:43.393674 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:58:43.393682 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:58:43.393690 | orchestrator | 2025-06-02 00:58:43.393698 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:58:43.393706 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 00:58:43.393714 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 00:58:43.393723 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 00:58:43.393731 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 00:58:43.393739 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 00:58:43.393747 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 00:58:43.393755 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 00:58:43.393763 | orchestrator | 2025-06-02 00:58:43.393771 | orchestrator | 
2025-06-02 00:58:43.393779 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:58:43.393792 | orchestrator | Monday 02 June 2025 00:58:40 +0000 (0:00:06.057) 0:03:07.866 *********** 2025-06-02 00:58:43.393800 | orchestrator | =============================================================================== 2025-06-02 00:58:43.393808 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.15s 2025-06-02 00:58:43.393816 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.59s 2025-06-02 00:58:43.393824 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.88s 2025-06-02 00:58:43.393832 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.63s 2025-06-02 00:58:43.393839 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.33s 2025-06-02 00:58:43.393847 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.34s 2025-06-02 00:58:43.393855 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.62s 2025-06-02 00:58:43.393863 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.54s 2025-06-02 00:58:43.393871 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.80s 2025-06-02 00:58:43.393878 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.06s 2025-06-02 00:58:43.393886 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.72s 2025-06-02 00:58:43.393894 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.27s 2025-06-02 00:58:43.393902 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.21s 2025-06-02 
00:58:43.393913 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.77s 2025-06-02 00:58:43.393922 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.38s 2025-06-02 00:58:43.393930 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.35s 2025-06-02 00:58:43.393937 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.30s 2025-06-02 00:58:43.393945 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.52s 2025-06-02 00:58:43.393957 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 2.22s 2025-06-02 00:58:43.393966 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.22s 2025-06-02 00:58:43.393974 | orchestrator | 2025-06-02 00:58:43 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:58:43.393982 | orchestrator | 2025-06-02 00:58:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:46.446803 | orchestrator | 2025-06-02 00:58:46 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:58:46.446905 | orchestrator | 2025-06-02 00:58:46 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 00:58:46.447737 | orchestrator | 2025-06-02 00:58:46 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED 2025-06-02 00:58:46.448648 | orchestrator | 2025-06-02 00:58:46 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state STARTED 2025-06-02 00:58:46.448670 | orchestrator | 2025-06-02 00:58:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:58:49.483824 | orchestrator | 2025-06-02 00:58:49 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:58:49.484851 | orchestrator | 2025-06-02 00:58:49 | INFO  | Task 
3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 00:59:35.189683 | orchestrator | 2025-06-02 00:59:35 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED 2025-06-02 00:59:35.191034 | orchestrator | 2025-06-02 00:59:35 | INFO  | Task 15ecbf31-d81c-4070-9525-6c877002593b is in state SUCCESS 2025-06-02 00:59:35.192265 | orchestrator | 2025-06-02 00:59:35.192291 | orchestrator | 2025-06-02 00:59:35.192302 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 00:59:35.192312 | orchestrator | 2025-06-02 00:59:35.192322 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:59:35.192332 | orchestrator | Monday 02 June 2025 00:55:52 +0000 (0:00:00.263) 0:00:00.263 *********** 2025-06-02 00:59:35.192342 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:59:35.192355 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:59:35.192365 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:59:35.192375 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:59:35.192385 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:59:35.192394 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:59:35.192404 | orchestrator | 2025-06-02 00:59:35.192414 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:59:35.192423 | orchestrator | Monday 02 June 2025 00:55:53 +0000 (0:00:00.697) 0:00:00.960 *********** 2025-06-02 00:59:35.192433 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-02 00:59:35.192443 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-02 00:59:35.192473 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-02 00:59:35.192484 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-02 00:59:35.192493 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-02 
00:59:35.192503 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-02 00:59:35.192532 | orchestrator | 2025-06-02 00:59:35.192543 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-02 00:59:35.192552 | orchestrator | 2025-06-02 00:59:35.192562 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 00:59:35.192572 | orchestrator | Monday 02 June 2025 00:55:53 +0000 (0:00:00.433) 0:00:01.394 *********** 2025-06-02 00:59:35.192582 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:59:35.192593 | orchestrator | 2025-06-02 00:59:35.192603 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-02 00:59:35.192613 | orchestrator | Monday 02 June 2025 00:55:54 +0000 (0:00:01.062) 0:00:02.456 *********** 2025-06-02 00:59:35.192623 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-02 00:59:35.192632 | orchestrator | 2025-06-02 00:59:35.192642 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-02 00:59:35.192651 | orchestrator | Monday 02 June 2025 00:55:57 +0000 (0:00:02.939) 0:00:05.396 *********** 2025-06-02 00:59:35.192673 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-02 00:59:35.192683 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-02 00:59:35.192693 | orchestrator | 2025-06-02 00:59:35.192702 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-02 00:59:35.192712 | orchestrator | Monday 02 June 2025 00:56:04 +0000 (0:00:06.335) 0:00:11.731 
*********** 2025-06-02 00:59:35.192722 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 00:59:35.192731 | orchestrator | 2025-06-02 00:59:35.192741 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-02 00:59:35.192750 | orchestrator | Monday 02 June 2025 00:56:07 +0000 (0:00:03.356) 0:00:15.087 *********** 2025-06-02 00:59:35.192760 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 00:59:35.192770 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-02 00:59:35.192779 | orchestrator | 2025-06-02 00:59:35.192789 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-02 00:59:35.192798 | orchestrator | Monday 02 June 2025 00:56:11 +0000 (0:00:03.522) 0:00:18.610 *********** 2025-06-02 00:59:35.192808 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 00:59:35.192818 | orchestrator | 2025-06-02 00:59:35.192827 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-02 00:59:35.192837 | orchestrator | Monday 02 June 2025 00:56:13 +0000 (0:00:02.860) 0:00:21.471 *********** 2025-06-02 00:59:35.192846 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-02 00:59:35.192856 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-02 00:59:35.192865 | orchestrator | 2025-06-02 00:59:35.192875 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-02 00:59:35.193058 | orchestrator | Monday 02 June 2025 00:56:21 +0000 (0:00:07.251) 0:00:28.723 *********** 2025-06-02 00:59:35.193085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.193109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.193122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.193140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.193153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.193166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.193192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.193205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.193222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.193235 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.193245 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.193260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.193277 | orchestrator | 2025-06-02 00:59:35.193287 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 00:59:35.193297 | orchestrator | Monday 02 June 2025 00:56:24 +0000 (0:00:03.035) 0:00:31.758 *********** 2025-06-02 00:59:35.193307 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.193317 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:59:35.193327 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:59:35.193337 | orchestrator | 
skipping: [testbed-node-3] 2025-06-02 00:59:35.193347 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:59:35.193357 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:59:35.193366 | orchestrator | 2025-06-02 00:59:35.193376 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 00:59:35.193386 | orchestrator | Monday 02 June 2025 00:56:24 +0000 (0:00:00.585) 0:00:32.344 *********** 2025-06-02 00:59:35.193396 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.193405 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:59:35.193415 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:59:35.193425 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:59:35.193435 | orchestrator | 2025-06-02 00:59:35.193445 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-02 00:59:35.193498 | orchestrator | Monday 02 June 2025 00:56:25 +0000 (0:00:01.031) 0:00:33.375 *********** 2025-06-02 00:59:35.193509 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-02 00:59:35.193519 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-02 00:59:35.193529 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-02 00:59:35.193538 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-02 00:59:35.193548 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-02 00:59:35.193558 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-02 00:59:35.193567 | orchestrator | 2025-06-02 00:59:35.193577 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-02 00:59:35.193630 | orchestrator | Monday 02 June 2025 00:56:27 +0000 (0:00:02.049) 0:00:35.424 *********** 2025-06-02 00:59:35.193647 | orchestrator | 
skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:59:35.193660 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:59:35.193684 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:59:35.193695 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:59:35.193706 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:59:35.193720 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:59:35.193731 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}]) 2025-06-02 00:59:35.193776 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:59:35.193790 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:59:35.193805 | orchestrator | changed: [testbed-node-4] => 
(item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:59:35.193815 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:59:35.193832 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:59:35.193842 | orchestrator | 2025-06-02 00:59:35.193852 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-02 00:59:35.193862 | orchestrator | Monday 02 June 2025 00:56:31 +0000 (0:00:03.616) 0:00:39.041 *********** 2025-06-02 00:59:35.193872 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:59:35.193883 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:59:35.193893 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:59:35.193902 | orchestrator | 2025-06-02 00:59:35.193912 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-02 00:59:35.193922 | orchestrator | Monday 02 June 2025 00:56:33 +0000 (0:00:01.767) 0:00:40.808 *********** 2025-06-02 00:59:35.193938 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-02 00:59:35.193949 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-02 00:59:35.193959 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-06-02 00:59:35.193969 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 00:59:35.193978 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 00:59:35.193988 | orchestrator | changed: [testbed-node-5] => 
(item=ceph.client.cinder-backup.keyring) 2025-06-02 00:59:35.193998 | orchestrator | 2025-06-02 00:59:35.194008 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-02 00:59:35.194058 | orchestrator | Monday 02 June 2025 00:56:36 +0000 (0:00:03.077) 0:00:43.885 *********** 2025-06-02 00:59:35.194071 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-02 00:59:35.194080 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-02 00:59:35.194090 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-02 00:59:35.194100 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-02 00:59:35.194110 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-06-02 00:59:35.194119 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-02 00:59:35.194129 | orchestrator | 2025-06-02 00:59:35.194139 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-02 00:59:35.194148 | orchestrator | Monday 02 June 2025 00:56:37 +0000 (0:00:00.945) 0:00:44.830 *********** 2025-06-02 00:59:35.194158 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.194168 | orchestrator | 2025-06-02 00:59:35.194177 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-02 00:59:35.194187 | orchestrator | Monday 02 June 2025 00:56:37 +0000 (0:00:00.162) 0:00:44.993 *********** 2025-06-02 00:59:35.194197 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.194207 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:59:35.194217 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:59:35.194226 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:59:35.194242 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:59:35.194252 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:59:35.194261 | orchestrator | 2025-06-02 
00:59:35.194271 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 00:59:35.194281 | orchestrator | Monday 02 June 2025 00:56:38 +0000 (0:00:00.769) 0:00:45.762 *********** 2025-06-02 00:59:35.194301 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:59:35.194311 | orchestrator | 2025-06-02 00:59:35.194322 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-02 00:59:35.194331 | orchestrator | Monday 02 June 2025 00:56:40 +0000 (0:00:02.013) 0:00:47.775 *********** 2025-06-02 00:59:35.194342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.194352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.194370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.194381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.194404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.194415 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.194425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.194833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.194910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.194948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.194973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.194985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.194998 | orchestrator | 2025-06-02 00:59:35.195011 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-02 00:59:35.195023 | orchestrator | Monday 02 June 2025 00:56:43 +0000 (0:00:02.787) 0:00:50.563 *********** 2025-06-02 00:59:35.195050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:59:35.195064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:59:35.195084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195113 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 00:59:35.195127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:59:35.195139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195151 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:59:35.195162 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:59:35.195181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195211 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:59:35.195227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195251 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:59:35.195262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195300 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:59:35.195311 | orchestrator | 2025-06-02 00:59:35.195323 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-02 00:59:35.195334 | orchestrator | Monday 02 June 2025 00:56:44 +0000 (0:00:01.377) 0:00:51.941 *********** 2025-06-02 00:59:35.195346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:59:35.195362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195376 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.195389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:59:35.195403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195416 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:59:35.195435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:59:35.195483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195497 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:59:35.195513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195537 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:59:35.195548 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195584 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:59:35.195596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.195623 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:59:35.195634 | orchestrator | 2025-06-02 00:59:35.195646 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-02 00:59:35.195657 | orchestrator | Monday 02 June 2025 00:56:46 +0000 (0:00:02.239) 0:00:54.180 *********** 2025-06-02 00:59:35.195669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.195681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.195704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.195717 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.195733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.195745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.195757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.195785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.195797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.195809 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.195825 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.195837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.195853 | orchestrator | 2025-06-02 00:59:35.195872 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-02 00:59:35.195901 | orchestrator | Monday 02 June 2025 00:56:50 +0000 (0:00:03.764) 0:00:57.945 *********** 2025-06-02 00:59:35.195920 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 00:59:35.195933 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:59:35.195944 | orchestrator 
| changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 00:59:35.195955 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 00:59:35.195966 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:59:35.195977 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 00:59:35.195988 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 00:59:35.196000 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:59:35.196017 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 00:59:35.196029 | orchestrator | 2025-06-02 00:59:35.196040 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-02 00:59:35.196051 | orchestrator | Monday 02 June 2025 00:56:52 +0000 (0:00:02.438) 0:01:00.384 *********** 2025-06-02 00:59:35.196062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196079 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.196091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.196109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.196128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196186 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196198 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196216 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196240 | orchestrator | 2025-06-02 00:59:35.196251 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-02 00:59:35.196262 | orchestrator | Monday 02 June 2025 00:57:02 +0000 (0:00:09.664) 0:01:10.048 *********** 2025-06-02 00:59:35.196274 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.196285 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:59:35.196297 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:59:35.196308 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:59:35.196319 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:59:35.196330 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:59:35.196341 | orchestrator | 2025-06-02 00:59:35.196351 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-02 00:59:35.196363 | orchestrator | Monday 02 June 2025 00:57:04 +0000 (0:00:02.438) 0:01:12.487 *********** 2025-06-02 00:59:35.196378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:59:35.196396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.196408 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.196426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:59:35.196438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.196474 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:59:35.196497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 
00:59:35.196525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.196556 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:59:35.196569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.196581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.196592 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:59:35.196611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.196623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.196634 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:59:35.196650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.196669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:59:35.196680 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:59:35.196692 | orchestrator | 2025-06-02 00:59:35.196703 | orchestrator | TASK [cinder : Copying over 
nfs_shares files for cinder_volume] **************** 2025-06-02 00:59:35.196714 | orchestrator | Monday 02 June 2025 00:57:05 +0000 (0:00:00.931) 0:01:13.418 *********** 2025-06-02 00:59:35.196726 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.196737 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:59:35.196748 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:59:35.196759 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:59:35.196770 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:59:35.196781 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:59:35.196792 | orchestrator | 2025-06-02 00:59:35.196803 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-02 00:59:35.196814 | orchestrator | Monday 02 June 2025 00:57:06 +0000 (0:00:00.709) 0:01:14.128 *********** 2025-06-02 00:59:35.196832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.196845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.196879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:59:35.196891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.196983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.197001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:59:35.197013 | orchestrator | 2025-06-02 00:59:35.197024 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 00:59:35.197035 | orchestrator | Monday 02 June 2025 00:57:08 +0000 (0:00:02.410) 0:01:16.538 *********** 2025-06-02 00:59:35.197046 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.197063 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:59:35.197075 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:59:35.197086 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:59:35.197097 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:59:35.197108 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:59:35.197119 | orchestrator | 2025-06-02 00:59:35.197130 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-02 00:59:35.197141 | orchestrator | Monday 02 June 2025 00:57:09 +0000 (0:00:00.694) 0:01:17.233 *********** 2025-06-02 00:59:35.197152 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:59:35.197163 | orchestrator | 2025-06-02 00:59:35.197174 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-02 00:59:35.197185 | orchestrator | Monday 02 June 2025 00:57:11 +0000 (0:00:01.765) 0:01:18.999 *********** 2025-06-02 00:59:35.197196 | 
orchestrator | changed: [testbed-node-0] 2025-06-02 00:59:35.197207 | orchestrator | 2025-06-02 00:59:35.197218 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-02 00:59:35.197229 | orchestrator | Monday 02 June 2025 00:57:13 +0000 (0:00:02.057) 0:01:21.056 *********** 2025-06-02 00:59:35.197240 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:59:35.197251 | orchestrator | 2025-06-02 00:59:35.197262 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 00:59:35.197273 | orchestrator | Monday 02 June 2025 00:57:31 +0000 (0:00:17.678) 0:01:38.735 *********** 2025-06-02 00:59:35.197284 | orchestrator | 2025-06-02 00:59:35.197299 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 00:59:35.197311 | orchestrator | Monday 02 June 2025 00:57:31 +0000 (0:00:00.057) 0:01:38.792 *********** 2025-06-02 00:59:35.197322 | orchestrator | 2025-06-02 00:59:35.197333 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 00:59:35.197344 | orchestrator | Monday 02 June 2025 00:57:31 +0000 (0:00:00.057) 0:01:38.850 *********** 2025-06-02 00:59:35.197354 | orchestrator | 2025-06-02 00:59:35.197365 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 00:59:35.197377 | orchestrator | Monday 02 June 2025 00:57:31 +0000 (0:00:00.056) 0:01:38.907 *********** 2025-06-02 00:59:35.197387 | orchestrator | 2025-06-02 00:59:35.197398 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 00:59:35.197409 | orchestrator | Monday 02 June 2025 00:57:31 +0000 (0:00:00.057) 0:01:38.964 *********** 2025-06-02 00:59:35.197420 | orchestrator | 2025-06-02 00:59:35.197431 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 
2025-06-02 00:59:35.197442 | orchestrator | Monday 02 June 2025 00:57:31 +0000 (0:00:00.055) 0:01:39.019 *********** 2025-06-02 00:59:35.197500 | orchestrator | 2025-06-02 00:59:35.197514 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-02 00:59:35.197525 | orchestrator | Monday 02 June 2025 00:57:31 +0000 (0:00:00.058) 0:01:39.078 *********** 2025-06-02 00:59:35.197536 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:59:35.197547 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:59:35.197558 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:59:35.197569 | orchestrator | 2025-06-02 00:59:35.197580 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-02 00:59:35.197591 | orchestrator | Monday 02 June 2025 00:57:54 +0000 (0:00:23.261) 0:02:02.339 *********** 2025-06-02 00:59:35.197602 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:59:35.197612 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:59:35.197623 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:59:35.197634 | orchestrator | 2025-06-02 00:59:35.197645 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-02 00:59:35.197656 | orchestrator | Monday 02 June 2025 00:58:05 +0000 (0:00:10.662) 0:02:13.001 *********** 2025-06-02 00:59:35.197667 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:59:35.197678 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:59:35.197697 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:59:35.197708 | orchestrator | 2025-06-02 00:59:35.197719 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-02 00:59:35.197730 | orchestrator | Monday 02 June 2025 00:59:20 +0000 (0:01:15.417) 0:03:28.419 *********** 2025-06-02 00:59:35.197741 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:59:35.197752 | 
orchestrator | changed: [testbed-node-5] 2025-06-02 00:59:35.197763 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:59:35.197774 | orchestrator | 2025-06-02 00:59:35.197785 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-02 00:59:35.197796 | orchestrator | Monday 02 June 2025 00:59:33 +0000 (0:00:12.819) 0:03:41.238 *********** 2025-06-02 00:59:35.197807 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:59:35.197818 | orchestrator | 2025-06-02 00:59:35.197829 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:59:35.197897 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 00:59:35.197912 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 00:59:35.197923 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 00:59:35.197934 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 00:59:35.197945 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 00:59:35.197955 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 00:59:35.197966 | orchestrator | 2025-06-02 00:59:35.197977 | orchestrator | 2025-06-02 00:59:35.197988 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:59:35.197999 | orchestrator | Monday 02 June 2025 00:59:34 +0000 (0:00:00.491) 0:03:41.730 *********** 2025-06-02 00:59:35.198010 | orchestrator | =============================================================================== 2025-06-02 00:59:35.198066 | orchestrator | cinder : Restart cinder-volume container 
------------------------------- 75.42s 2025-06-02 00:59:35.198078 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.26s 2025-06-02 00:59:35.198089 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.68s 2025-06-02 00:59:35.198100 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.82s 2025-06-02 00:59:35.198111 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.66s 2025-06-02 00:59:35.198122 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.66s 2025-06-02 00:59:35.198133 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.25s 2025-06-02 00:59:35.198144 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.34s 2025-06-02 00:59:35.198160 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.76s 2025-06-02 00:59:35.198172 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.62s 2025-06-02 00:59:35.198183 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.52s 2025-06-02 00:59:35.198194 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.36s 2025-06-02 00:59:35.198205 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.08s 2025-06-02 00:59:35.198215 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.04s 2025-06-02 00:59:35.198226 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.94s 2025-06-02 00:59:35.198244 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.86s 2025-06-02 00:59:35.198255 | orchestrator | service-cert-copy : cinder | Copying over extra CA 
certificates --------- 2.79s 2025-06-02 00:59:35.198266 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.44s 2025-06-02 00:59:35.198277 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.44s 2025-06-02 00:59:35.198288 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.41s 2025-06-02 00:59:35.198300 | orchestrator | 2025-06-02 00:59:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:59:38.216526 | orchestrator | 2025-06-02 00:59:38 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:59:38.216615 | orchestrator | 2025-06-02 00:59:38 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED 2025-06-02 00:59:38.216630 | orchestrator | 2025-06-02 00:59:38 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 00:59:38.216914 | orchestrator | 2025-06-02 00:59:38 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED 2025-06-02 00:59:38.216936 | orchestrator | 2025-06-02 00:59:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:59:41.266493 | orchestrator | 2025-06-02 00:59:41 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:59:41.266798 | orchestrator | 2025-06-02 00:59:41 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED 2025-06-02 00:59:41.267379 | orchestrator | 2025-06-02 00:59:41 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 00:59:41.268102 | orchestrator | 2025-06-02 00:59:41 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED 2025-06-02 00:59:41.268127 | orchestrator | 2025-06-02 00:59:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:59:44.293700 | orchestrator | 2025-06-02 00:59:44 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:59:44.293983 | 
orchestrator | 2025-06-02 00:59:44 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED [... repeated 1-second polling cycles from 00:59:44 to 00:59:56 trimmed; tasks d0d011c1, a28726f0, 3969b186, and 3456ec18 remained in state STARTED ...] 2025-06-02 00:59:59.432742 | orchestrator | 2025-06-02 00:59:59 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 00:59:59.432831 | orchestrator | 2025-06-02 00:59:59 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED 2025-06-02 00:59:59.434353 | orchestrator | 2025-06-02 00:59:59 | INFO  | Task 7c856a79-a2cc-46e6-8ac2-6a46f62a49e4 is in state STARTED 2025-06-02 00:59:59.438890 | orchestrator | 2025-06-02 00:59:59 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 00:59:59.439226 | orchestrator | 2025-06-02 00:59:59 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED 2025-06-02 00:59:59.439303 | orchestrator | 2025-06-02 00:59:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:00:02.477775 | orchestrator | 2025-06-02 01:00:02 | INFO  | Task
d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED [... repeated 1-second polling cycles from 01:00:02 to 01:00:14 trimmed; tasks d0d011c1, a28726f0, 7c856a79, 3969b186, and 3456ec18 remained in state STARTED ...] 2025-06-02 01:00:17.663321 | orchestrator | 2025-06-02 01:00:17 | INFO  | Task
d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:00:17.664595 | orchestrator | 2025-06-02 01:00:17 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED 2025-06-02 01:00:17.665265 | orchestrator | 2025-06-02 01:00:17 | INFO  | Task 7c856a79-a2cc-46e6-8ac2-6a46f62a49e4 is in state SUCCESS 2025-06-02 01:00:17.668416 | orchestrator | 2025-06-02 01:00:17 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 01:00:17.673509 | orchestrator | 2025-06-02 01:00:17 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED 2025-06-02 01:00:17.673545 | orchestrator | 2025-06-02 01:00:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:00:20.699371 | orchestrator | 2025-06-02 01:00:20 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:00:20.699523 | orchestrator | 2025-06-02 01:00:20 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED 2025-06-02 01:00:20.699546 | orchestrator | 2025-06-02 01:00:20 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 01:00:20.699688 | orchestrator | 2025-06-02 01:00:20 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED 2025-06-02 01:00:20.699706 | orchestrator | 2025-06-02 01:00:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:00:23.726238 | orchestrator | 2025-06-02 01:00:23 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:00:23.726352 | orchestrator | 2025-06-02 01:00:23 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED 2025-06-02 01:00:23.726615 | orchestrator | 2025-06-02 01:00:23 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 01:00:23.727190 | orchestrator | 2025-06-02 01:00:23 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED 2025-06-02 01:00:23.727309 | orchestrator | 2025-06-02 01:00:23 | INFO  | Wait 1 
second(s) until the next check [... repeated 1-second polling cycles from 01:00:26 to 01:00:41 trimmed; tasks d0d011c1, a28726f0, 3969b186, and 3456ec18 remained in state STARTED ...] 2025-06-02 01:00:44.910370 |
orchestrator | 2025-06-02 01:00:44 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:00:44.911297 | orchestrator | 2025-06-02 01:00:44 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED 2025-06-02 01:00:44.916012 | orchestrator | 2025-06-02 01:00:44 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 01:00:44.916043 | orchestrator | 2025-06-02 01:00:44 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state STARTED 2025-06-02 01:00:44.916058 | orchestrator | 2025-06-02 01:00:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:00:47.950170 | orchestrator | 2025-06-02 01:00:47 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:00:47.950558 | orchestrator | 2025-06-02 01:00:47 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED 2025-06-02 01:00:47.951245 | orchestrator | 2025-06-02 01:00:47 | INFO  | Task 4a719fca-d474-43a3-9fd8-882e6bb012ee is in state STARTED 2025-06-02 01:00:47.951746 | orchestrator | 2025-06-02 01:00:47 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED 2025-06-02 01:00:47.955084 | orchestrator | 2025-06-02 01:00:47 | INFO  | Task 3456ec18-76bc-4d67-a21a-edbb993c9d35 is in state SUCCESS 2025-06-02 01:00:47.956652 | orchestrator | 2025-06-02 01:00:47.956685 | orchestrator | None 2025-06-02 01:00:47.956696 | orchestrator | 2025-06-02 01:00:47.956706 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 01:00:47.956716 | orchestrator | 2025-06-02 01:00:47.956726 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 01:00:47.956736 | orchestrator | Monday 02 June 2025 00:58:44 +0000 (0:00:00.247) 0:00:00.247 *********** 2025-06-02 01:00:47.956746 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:00:47.956758 | orchestrator | ok: [testbed-node-1] 2025-06-02 
01:00:47.956768 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:00:47.956778 | orchestrator | 2025-06-02 01:00:47.956788 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 01:00:47.956798 | orchestrator | Monday 02 June 2025 00:58:45 +0000 (0:00:00.284) 0:00:00.531 *********** 2025-06-02 01:00:47.956808 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-02 01:00:47.956818 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-02 01:00:47.956841 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-02 01:00:47.956852 | orchestrator | 2025-06-02 01:00:47.956861 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-02 01:00:47.956871 | orchestrator | 2025-06-02 01:00:47.956881 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 01:00:47.956910 | orchestrator | Monday 02 June 2025 00:58:45 +0000 (0:00:00.378) 0:00:00.910 *********** 2025-06-02 01:00:47.956920 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:00:47.956931 | orchestrator | 2025-06-02 01:00:47.956940 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-02 01:00:47.956950 | orchestrator | Monday 02 June 2025 00:58:45 +0000 (0:00:00.501) 0:00:01.411 *********** 2025-06-02 01:00:47.956960 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-02 01:00:47.956970 | orchestrator | 2025-06-02 01:00:47.956979 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-02 01:00:47.956988 | orchestrator | Monday 02 June 2025 00:58:49 +0000 (0:00:03.167) 0:00:04.579 *********** 2025-06-02 01:00:47.956998 | orchestrator | changed: [testbed-node-0] => (item=barbican -> 
https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-02 01:00:47.957008 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-02 01:00:47.957017 | orchestrator | 2025-06-02 01:00:47.957027 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-02 01:00:47.957120 | orchestrator | Monday 02 June 2025 00:58:55 +0000 (0:00:06.171) 0:00:10.751 *********** 2025-06-02 01:00:47.957132 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 01:00:47.957141 | orchestrator | 2025-06-02 01:00:47.957151 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-02 01:00:47.957183 | orchestrator | Monday 02 June 2025 00:58:58 +0000 (0:00:02.992) 0:00:13.743 *********** 2025-06-02 01:00:47.957194 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 01:00:47.957204 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-02 01:00:47.957214 | orchestrator | 2025-06-02 01:00:47.957224 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-02 01:00:47.957233 | orchestrator | Monday 02 June 2025 00:59:01 +0000 (0:00:03.673) 0:00:17.416 *********** 2025-06-02 01:00:47.957243 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 01:00:47.957253 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-02 01:00:47.957264 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-02 01:00:47.957275 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-02 01:00:47.957286 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-02 01:00:47.957298 | orchestrator | 2025-06-02 01:00:47.957309 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-02 01:00:47.957321 | orchestrator | 
Monday 02 June 2025 00:59:16 +0000 (0:00:14.826) 0:00:32.243 *********** 2025-06-02 01:00:47.957332 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-02 01:00:47.957343 | orchestrator | 2025-06-02 01:00:47.957354 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-02 01:00:47.957367 | orchestrator | Monday 02 June 2025 00:59:20 +0000 (0:00:04.001) 0:00:36.245 *********** 2025-06-02 01:00:47.957381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.957449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.957464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.957476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957569 | orchestrator | 2025-06-02 01:00:47.957581 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-02 01:00:47.957592 | orchestrator | Monday 02 June 2025 00:59:23 +0000 (0:00:02.396) 0:00:38.642 *********** 2025-06-02 01:00:47.957604 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-02 01:00:47.957616 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-02 01:00:47.957626 | orchestrator | changed: [testbed-node-2] => 
(item=barbican-api/vassals) 2025-06-02 01:00:47.957635 | orchestrator | 2025-06-02 01:00:47.957645 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-02 01:00:47.957655 | orchestrator | Monday 02 June 2025 00:59:24 +0000 (0:00:01.692) 0:00:40.334 *********** 2025-06-02 01:00:47.957665 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:00:47.957675 | orchestrator | 2025-06-02 01:00:47.957685 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-02 01:00:47.957694 | orchestrator | Monday 02 June 2025 00:59:25 +0000 (0:00:00.251) 0:00:40.586 *********** 2025-06-02 01:00:47.957704 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:00:47.957714 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:00:47.957724 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:00:47.957733 | orchestrator | 2025-06-02 01:00:47.957743 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 01:00:47.957753 | orchestrator | Monday 02 June 2025 00:59:25 +0000 (0:00:00.641) 0:00:41.227 *********** 2025-06-02 01:00:47.957763 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:00:47.957773 | orchestrator | 2025-06-02 01:00:47.957783 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-02 01:00:47.957792 | orchestrator | Monday 02 June 2025 00:59:26 +0000 (0:00:00.863) 0:00:42.091 *********** 2025-06-02 01:00:47.957803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.957825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.957840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.957851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957877 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957918 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.957928 | orchestrator | 2025-06-02 01:00:47.957938 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-02 01:00:47.957948 | orchestrator | Monday 02 June 2025 00:59:30 +0000 (0:00:03.761) 0:00:45.852 *********** 2025-06-02 01:00:47.957958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 01:00:47.957969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.957985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.957995 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:00:47.958012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 01:00:47.958069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958091 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:00:47.958102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 01:00:47.958118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 
01:00:47.958139 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:00:47.958149 | orchestrator | 2025-06-02 01:00:47.958164 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-02 01:00:47.958175 | orchestrator | Monday 02 June 2025 00:59:31 +0000 (0:00:01.413) 0:00:47.266 *********** 2025-06-02 01:00:47.958189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 01:00:47.958200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958230 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:00:47.958241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 01:00:47.958251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958278 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:00:47.958292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 01:00:47.958302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958328 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:00:47.958338 | orchestrator | 2025-06-02 01:00:47.958348 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-02 01:00:47.958358 | orchestrator | Monday 02 June 2025 00:59:33 +0000 (0:00:01.281) 0:00:48.547 *********** 2025-06-02 01:00:47.958368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.958393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:00:47.958435 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.958451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958529 | orchestrator | 2025-06-02 01:00:47.958539 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-02 01:00:47.958549 | orchestrator | Monday 02 June 2025 00:59:36 +0000 (0:00:03.289) 0:00:51.837 *********** 2025-06-02 01:00:47.958559 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:00:47.958569 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:00:47.958579 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:00:47.958589 | orchestrator | 2025-06-02 01:00:47.958598 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-02 01:00:47.958608 | orchestrator | Monday 02 June 2025 00:59:38 +0000 (0:00:02.590) 0:00:54.428 *********** 2025-06-02 01:00:47.958618 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 01:00:47.958628 | orchestrator | 2025-06-02 01:00:47.958637 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] 
************************** 2025-06-02 01:00:47.958647 | orchestrator | Monday 02 June 2025 00:59:40 +0000 (0:00:01.643) 0:00:56.071 *********** 2025-06-02 01:00:47.958657 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:00:47.958670 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:00:47.958690 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:00:47.958708 | orchestrator | 2025-06-02 01:00:47.958723 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-02 01:00:47.958739 | orchestrator | Monday 02 June 2025 00:59:41 +0000 (0:00:00.514) 0:00:56.586 *********** 2025-06-02 01:00:47.958756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.958784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.958808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.958862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.958942 | orchestrator | 2025-06-02 01:00:47.958952 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-02 01:00:47.958962 | orchestrator | Monday 02 June 2025 00:59:48 +0000 (0:00:06.963) 0:01:03.549 *********** 2025-06-02 01:00:47.958972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 01:00:47.958982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.958992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.959002 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:00:47.959017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 01:00:47.959033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-06-02 01:00:47.959049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.959059 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:00:47.959069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 01:00:47.959079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.959089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:00:47.959099 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:00:47.959109 | orchestrator | 2025-06-02 01:00:47.959119 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-02 01:00:47.959129 | orchestrator | Monday 02 June 2025 00:59:49 +0000 (0:00:01.205) 0:01:04.754 *********** 2025-06-02 01:00:47.959149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.959165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.959175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 01:00:47.959186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.959196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.959229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.959277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.959290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.959301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:00:47.959312 | orchestrator | 2025-06-02 01:00:47.959323 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 01:00:47.959334 | orchestrator | Monday 02 June 2025 00:59:52 +0000 (0:00:02.781) 0:01:07.535 *********** 2025-06-02 01:00:47.959345 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:00:47.959356 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:00:47.959367 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:00:47.959378 | orchestrator | 2025-06-02 01:00:47.959389 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-02 01:00:47.959453 | orchestrator | Monday 02 June 2025 00:59:52 +0000 (0:00:00.508) 0:01:08.044 *********** 2025-06-02 01:00:47.959467 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:00:47.959478 | orchestrator | 2025-06-02 01:00:47.959489 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-02 01:00:47.959500 | orchestrator | Monday 02 June 2025 00:59:54 +0000 (0:00:02.203) 0:01:10.247 *********** 2025-06-02 01:00:47.959511 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:00:47.959521 | orchestrator | 2025-06-02 01:00:47.959532 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-02 01:00:47.959543 | orchestrator | Monday 02 June 2025 00:59:57 +0000 (0:00:02.393) 0:01:12.641 *********** 2025-06-02 01:00:47.959554 | orchestrator | changed: [testbed-node-0] 
2025-06-02 01:00:47.959565 | orchestrator | 2025-06-02 01:00:47.959575 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 01:00:47.959586 | orchestrator | Monday 02 June 2025 01:00:07 +0000 (0:00:10.668) 0:01:23.309 *********** 2025-06-02 01:00:47.959604 | orchestrator | 2025-06-02 01:00:47.959615 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 01:00:47.959625 | orchestrator | Monday 02 June 2025 01:00:07 +0000 (0:00:00.096) 0:01:23.406 *********** 2025-06-02 01:00:47.959636 | orchestrator | 2025-06-02 01:00:47.959647 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 01:00:47.959658 | orchestrator | Monday 02 June 2025 01:00:08 +0000 (0:00:00.092) 0:01:23.498 *********** 2025-06-02 01:00:47.959669 | orchestrator | 2025-06-02 01:00:47.959680 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-02 01:00:47.959691 | orchestrator | Monday 02 June 2025 01:00:08 +0000 (0:00:00.055) 0:01:23.554 *********** 2025-06-02 01:00:47.959701 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:00:47.959712 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:00:47.959723 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:00:47.959734 | orchestrator | 2025-06-02 01:00:47.959745 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-02 01:00:47.959762 | orchestrator | Monday 02 June 2025 01:00:21 +0000 (0:00:13.039) 0:01:36.593 *********** 2025-06-02 01:00:47.959774 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:00:47.959785 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:00:47.959796 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:00:47.959807 | orchestrator | 2025-06-02 01:00:47.959818 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker 
container] ***************** 2025-06-02 01:00:47.959829 | orchestrator | Monday 02 June 2025 01:00:32 +0000 (0:00:11.112) 0:01:47.706 *********** 2025-06-02 01:00:47.959839 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:00:47.959850 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:00:47.959861 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:00:47.959872 | orchestrator | 2025-06-02 01:00:47.959883 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 01:00:47.959894 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 01:00:47.959911 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 01:00:47.959922 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 01:00:47.959932 | orchestrator | 2025-06-02 01:00:47.959941 | orchestrator | 2025-06-02 01:00:47.959951 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 01:00:47.959960 | orchestrator | Monday 02 June 2025 01:00:44 +0000 (0:00:12.403) 0:02:00.110 *********** 2025-06-02 01:00:47.959970 | orchestrator | =============================================================================== 2025-06-02 01:00:47.959979 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.83s 2025-06-02 01:00:47.959989 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.04s 2025-06-02 01:00:47.959999 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.40s 2025-06-02 01:00:47.960008 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.11s 2025-06-02 01:00:47.960018 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 
10.67s 2025-06-02 01:00:47.960027 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.96s 2025-06-02 01:00:47.960037 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.17s 2025-06-02 01:00:47.960046 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.00s 2025-06-02 01:00:47.960056 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.76s 2025-06-02 01:00:47.960065 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.67s 2025-06-02 01:00:47.960075 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.29s 2025-06-02 01:00:47.960089 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.17s 2025-06-02 01:00:47.960099 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 2.99s 2025-06-02 01:00:47.960109 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.78s 2025-06-02 01:00:47.960118 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.59s 2025-06-02 01:00:47.960128 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.40s 2025-06-02 01:00:47.960137 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.39s 2025-06-02 01:00:47.960147 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.20s 2025-06-02 01:00:47.960156 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.69s 2025-06-02 01:00:47.960166 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.64s 2025-06-02 01:00:50.974813 | orchestrator | 2025-06-02 01:00:50 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state 
STARTED
2025-06-02 01:00:50.975050 | orchestrator | 2025-06-02 01:00:50 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state STARTED
2025-06-02 01:00:50.975844 | orchestrator | 2025-06-02 01:00:50 | INFO  | Task 4a719fca-d474-43a3-9fd8-882e6bb012ee is in state STARTED
2025-06-02 01:00:50.976650 | orchestrator | 2025-06-02 01:00:50 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED
2025-06-02 01:00:50.976670 | orchestrator | 2025-06-02 01:00:50 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated every ~3 s from 01:00:54 through 01:02:25 condensed; the status changes during that window were:]
2025-06-02 01:01:27.498785 | orchestrator | 2025-06-02 01:01:27 | INFO  | Task 4a719fca-d474-43a3-9fd8-882e6bb012ee is in state SUCCESS
2025-06-02 01:01:30.548596 | orchestrator | 2025-06-02 01:01:30 | INFO  | Task 543a28f0-3860-4fc0-a8fd-9e8ed81df01b is in state STARTED
2025-06-02 01:02:28.398261 | orchestrator | 2025-06-02 01:02:28 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 01:02:28.402395 | orchestrator | 2025-06-02 01:02:28 | INFO  | Task a28726f0-1c4e-49c8-8f30-a1d52be9727d is in state SUCCESS
2025-06-02 01:02:28.403915 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-06-02 01:02:28.404040 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-06-02 01:02:28.404054 | orchestrator | Monday 02 June 2025 01:00:49 +0000 (0:00:00.091) 0:00:00.092 ***********
2025-06-02 01:02:28.404065 | orchestrator | changed: [localhost]
2025-06-02 
01:02:28.404090 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-06-02 01:02:28.404101 | orchestrator | Monday 02 June 2025 01:00:50 +0000 (0:00:00.840) 0:00:00.933 ***********
2025-06-02 01:02:28.404112 | orchestrator | changed: [localhost]
2025-06-02 01:02:28.404134 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-06-02 01:02:28.404145 | orchestrator | Monday 02 June 2025 01:01:20 +0000 (0:00:30.196) 0:00:31.130 ***********
2025-06-02 01:02:28.404156 | orchestrator | changed: [localhost]
2025-06-02 01:02:28.404178 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 01:02:28.404200 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 01:02:28.404211 | orchestrator | Monday 02 June 2025 01:01:25 +0000 (0:00:04.901) 0:00:36.031 ***********
2025-06-02 01:02:28.404222 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:02:28.404233 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:02:28.404244 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:02:28.404266 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 01:02:28.404276 | orchestrator | Monday 02 June 2025 01:01:25 +0000 (0:00:00.425) 0:00:36.457 ***********
2025-06-02 01:02:28.404334 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-06-02 01:02:28.404348 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-06-02 01:02:28.404359 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-06-02 01:02:28.404370 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-06-02 01:02:28.404392 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-06-02 01:02:28.404403 | orchestrator | skipping: no hosts matched
2025-06-02 01:02:28.404428 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 01:02:28.404441 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 01:02:28.404456 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 01:02:28.404471 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 01:02:28.404485 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 01:02:28.404526 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 01:02:28.404561 | orchestrator | Monday 02 June 2025 01:01:26 +0000 (0:00:00.791) 0:00:37.248 ***********
2025-06-02 01:02:28.404584 | orchestrator | ===============================================================================
2025-06-02 01:02:28.404611 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 30.20s
2025-06-02 01:02:28.404625 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.90s
2025-06-02 01:02:28.404638 | orchestrator | Ensure the destination directory exists --------------------------------- 0.84s
2025-06-02 01:02:28.404662 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2025-06-02 01:02:28.404702 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2025-06-02 01:02:28.404753 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 01:02:28.404777 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 01:02:28.404788 | orchestrator | Monday 02 June 2025 00:59:40 +0000 (0:00:00.393) 0:00:00.394 ***********
2025-06-02 01:02:28.404799 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:02:28.404810 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:02:28.404822 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:02:28.404843 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 01:02:28.404854 | orchestrator | Monday 02 June 2025 00:59:40 +0000 (0:00:00.300) 0:00:00.694 ***********
2025-06-02 01:02:28.404865 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-06-02 01:02:28.404876 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-06-02 01:02:28.404887 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-06-02 01:02:28.404910 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-06-02 01:02:28.404932 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-02 01:02:28.404964 | orchestrator | Monday 02 June 2025 00:59:41 +0000 (0:00:00.294) 0:00:00.989 ***********
2025-06-02 01:02:28.404976 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 01:02:28.405041 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-06-02 01:02:28.405052 | orchestrator | Monday 02 June 2025 00:59:41 +0000 (0:00:00.437) 0:00:01.427 ***********
2025-06-02 01:02:28.405077 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-06-02 01:02:28.405100 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-06-02 01:02:28.405111 | orchestrator | Monday 02 June 2025 00:59:44 +0000 (0:00:03.245) 0:00:04.673 ***********
2025-06-02 01:02:28.405122 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-06-02 01:02:28.405133 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-06-02 01:02:28.405155 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-06-02 01:02:28.405166 | orchestrator | Monday 02 June 2025 00:59:51 +0000 (0:00:06.481) 0:00:11.154 ***********
2025-06-02 01:02:28.405177 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 01:02:28.405241 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-06-02 01:02:28.405253 | orchestrator | Monday 02 June 2025 00:59:54 +0000 (0:00:03.196) 0:00:14.350 ***********
2025-06-02 01:02:28.405264 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 01:02:28.405274 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-06-02 01:02:28.405343 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-06-02 01:02:28.405363 | orchestrator | Monday 02 June 2025 00:59:58 +0000 (0:00:03.960) 0:00:18.311 ***********
2025-06-02 01:02:28.405382 | 
orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 01:02:28.405393 | orchestrator | 2025-06-02 01:02:28.405404 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-02 01:02:28.405415 | orchestrator | Monday 02 June 2025 01:00:02 +0000 (0:00:03.832) 0:00:22.143 *********** 2025-06-02 01:02:28.405426 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-02 01:02:28.405436 | orchestrator | 2025-06-02 01:02:28.405447 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-02 01:02:28.405467 | orchestrator | Monday 02 June 2025 01:00:06 +0000 (0:00:04.330) 0:00:26.474 *********** 2025-06-02 01:02:28.405481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.405499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.405527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.405540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.405760 | orchestrator | 2025-06-02 01:02:28.405771 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-02 01:02:28.405783 | orchestrator | Monday 02 June 2025 01:00:10 +0000 (0:00:03.394) 0:00:29.869 *********** 2025-06-02 01:02:28.405794 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:28.405805 | orchestrator | 2025-06-02 01:02:28.405816 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-02 01:02:28.405827 | orchestrator | Monday 02 June 2025 01:00:10 +0000 (0:00:00.347) 0:00:30.217 *********** 
2025-06-02 01:02:28.405837 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:28.405849 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:28.405860 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:28.405871 | orchestrator | 2025-06-02 01:02:28.405882 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 01:02:28.405892 | orchestrator | Monday 02 June 2025 01:00:10 +0000 (0:00:00.621) 0:00:30.838 *********** 2025-06-02 01:02:28.405903 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:02:28.405914 | orchestrator | 2025-06-02 01:02:28.405925 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-02 01:02:28.405936 | orchestrator | Monday 02 June 2025 01:00:12 +0000 (0:00:01.112) 0:00:31.951 *********** 2025-06-02 01:02:28.405947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.405964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.405983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.406002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.406276 | orchestrator | 2025-06-02 01:02:28.406313 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-02 01:02:28.406326 | orchestrator | Monday 02 June 2025 01:00:18 +0000 (0:00:06.613) 0:00:38.565 *********** 2025-06-02 01:02:28.406338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 01:02:28.406350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 01:02:28.406837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.406914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.406929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.406941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2025-06-02 01:02:28.406954 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:28.406970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 01:02:28.406982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 01:02:28.407000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407071 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:28.407083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 01:02:28.407095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 01:02:28.407112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 
01:02:28.407161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.407172 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:28.407184 | orchestrator |
2025-06-02 01:02:28.407196 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-06-02 01:02:28.407208 | orchestrator | Monday 02 June 2025 01:00:20 +0000 (0:00:01.520) 0:00:40.085 ***********
2025-06-02 01:02:28.407220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.407232 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 01:02:28.407256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407339 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:28.407351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 01:02:28.407363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 01:02:28.407391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407469 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:28.407483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 01:02:28.407497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 01:02:28.407523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.407552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.407567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.407581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.407594 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:28.407608 | orchestrator |
2025-06-02 01:02:28.407622 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-06-02 01:02:28.407635 | orchestrator | Monday 02 June 2025 01:00:22 +0000 (0:00:02.750) 0:00:42.836 ***********
2025-06-02 01:02:28.407648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.407669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.407697 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.407712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.407911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.407922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.407934 | orchestrator |
2025-06-02 01:02:28.407945 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-06-02 01:02:28.407956 | orchestrator | Monday 02 June 2025 01:00:29 +0000 (0:00:06.672) 0:00:49.508 ***********
2025-06-02 01:02:28.407967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.407985 | orchestrator | changed: [testbed-node-0] =>
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.408008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.408020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.408032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.408043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.408060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408227 | orchestrator |
2025-06-02 01:02:28.408246 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-06-02 01:02:28.408257 | orchestrator | Monday 02 June 2025 01:00:49 +0000 (0:00:20.075) 0:01:09.583 ***********
2025-06-02 01:02:28.408268 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-06-02 01:02:28.408279 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-06-02 01:02:28.408334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-06-02 01:02:28.408348 | orchestrator |
2025-06-02 01:02:28.408359 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-06-02 01:02:28.408369 | orchestrator | Monday 02 June 2025 01:00:53 +0000 (0:00:04.170) 0:01:13.754 ***********
2025-06-02 01:02:28.408380 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-06-02 01:02:28.408391 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-06-02 01:02:28.408402 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-06-02 01:02:28.408413 | orchestrator |
2025-06-02 01:02:28.408423 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-06-02 01:02:28.408434 | orchestrator | Monday 02 June 2025 01:00:56 +0000 (0:00:02.478) 0:01:16.233 ***********
2025-06-02 01:02:28.408446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.408469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.408482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.408501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.408513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.408536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.408598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408713 | orchestrator |
2025-06-02 01:02:28.408724 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-06-02 01:02:28.408735 | orchestrator | Monday 02 June 2025 01:00:58 +0000 (0:00:02.429) 0:01:18.662 ***********
2025-06-02 01:02:28.408747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.408763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.408782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.408800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.408812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.408868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.408921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.408996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.409008 | orchestrator |
2025-06-02 01:02:28.409019 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-02 01:02:28.409030 | orchestrator | Monday 02 June 2025 01:01:01 +0000 (0:00:02.628) 0:01:21.290 ***********
2025-06-02 01:02:28.409041 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:28.409053 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:28.409064 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:28.409075 | orchestrator |
2025-06-02 01:02:28.409086 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-06-02 01:02:28.409097 | orchestrator | Monday 02 June 2025 01:01:01 +0000 (0:00:00.492) 0:01:21.783 ***********
2025-06-02 01:02:28.409108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 01:02:28.409124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 01:02:28.409148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 01:02:28.409161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name':
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409195 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:28.409206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 01:02:28.409222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 01:02:28.409248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409349 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:28.409362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 01:02:28.409373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 01:02:28.409405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:02:28.409452 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:28.409463 | orchestrator | 2025-06-02 01:02:28.409475 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-02 01:02:28.409486 | orchestrator | Monday 02 June 2025 01:01:03 +0000 (0:00:01.223) 0:01:23.007 *********** 2025-06-02 01:02:28.409497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.409522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.409667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 01:02:28.409684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:02:28.409896 | orchestrator | 2025-06-02 01:02:28.409907 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 01:02:28.409918 | orchestrator | Monday 02 June 2025 01:01:07 +0000 (0:00:04.813) 0:01:27.822 *********** 2025-06-02 01:02:28.409929 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:28.409941 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:28.409952 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:28.409969 | orchestrator | 2025-06-02 01:02:28.409980 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2025-06-02 01:02:28.409991 | orchestrator | Monday 02 June 2025 01:01:08 +0000 (0:00:00.616) 0:01:28.438 *********** 2025-06-02 01:02:28.410002 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-02 01:02:28.410013 | orchestrator | 2025-06-02 01:02:28.410083 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-02 01:02:28.410102 | orchestrator | Monday 02 June 2025 01:01:11 +0000 (0:00:02.753) 0:01:31.192 *********** 2025-06-02 01:02:28.410122 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 01:02:28.410140 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-02 01:02:28.410151 | orchestrator | 2025-06-02 01:02:28.410163 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-02 01:02:28.410174 | orchestrator | Monday 02 June 2025 01:01:13 +0000 (0:00:02.216) 0:01:33.409 *********** 2025-06-02 01:02:28.410185 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:28.410196 | orchestrator | 2025-06-02 01:02:28.410207 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 01:02:28.410218 | orchestrator | Monday 02 June 2025 01:01:30 +0000 (0:00:17.137) 0:01:50.546 *********** 2025-06-02 01:02:28.410228 | orchestrator | 2025-06-02 01:02:28.410239 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 01:02:28.410250 | orchestrator | Monday 02 June 2025 01:01:30 +0000 (0:00:00.120) 0:01:50.666 *********** 2025-06-02 01:02:28.410261 | orchestrator | 2025-06-02 01:02:28.410272 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 01:02:28.410283 | orchestrator | Monday 02 June 2025 01:01:30 +0000 (0:00:00.072) 0:01:50.739 *********** 2025-06-02 01:02:28.410320 | orchestrator | 2025-06-02 
01:02:28.410332 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-06-02 01:02:28.410343 | orchestrator | Monday 02 June 2025 01:01:30 +0000 (0:00:00.073) 0:01:50.813 ***********
2025-06-02 01:02:28.410354 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:02:28.410365 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:02:28.410376 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:02:28.410387 | orchestrator |
2025-06-02 01:02:28.410398 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-06-02 01:02:28.410414 | orchestrator | Monday 02 June 2025 01:01:38 +0000 (0:00:07.554) 0:01:58.367 ***********
2025-06-02 01:02:28.410425 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:02:28.410436 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:02:28.410448 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:02:28.410458 | orchestrator |
2025-06-02 01:02:28.410470 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-06-02 01:02:28.410488 | orchestrator | Monday 02 June 2025 01:01:48 +0000 (0:00:09.983) 0:02:08.350 ***********
2025-06-02 01:02:28.410500 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:02:28.410511 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:02:28.410522 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:02:28.410533 | orchestrator |
2025-06-02 01:02:28.410544 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-06-02 01:02:28.410555 | orchestrator | Monday 02 June 2025 01:01:58 +0000 (0:00:10.325) 0:02:18.676 ***********
2025-06-02 01:02:28.410566 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:02:28.410577 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:02:28.410588 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:02:28.410599 | orchestrator |
2025-06-02 01:02:28.410610 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-06-02 01:02:28.410621 | orchestrator | Monday 02 June 2025 01:02:07 +0000 (0:00:08.731) 0:02:27.407 ***********
2025-06-02 01:02:28.410632 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:02:28.410643 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:02:28.410654 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:02:28.410665 | orchestrator |
2025-06-02 01:02:28.410684 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-06-02 01:02:28.410695 | orchestrator | Monday 02 June 2025 01:02:14 +0000 (0:00:06.909) 0:02:34.316 ***********
2025-06-02 01:02:28.410706 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:02:28.410717 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:02:28.410728 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:02:28.410739 | orchestrator |
2025-06-02 01:02:28.410750 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-06-02 01:02:28.410761 | orchestrator | Monday 02 June 2025 01:02:19 +0000 (0:00:05.294) 0:02:39.611 ***********
2025-06-02 01:02:28.410772 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:02:28.410783 | orchestrator |
2025-06-02 01:02:28.410794 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 01:02:28.410805 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 01:02:28.410818 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 01:02:28.410829 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 01:02:28.410840 | orchestrator |
2025-06-02 01:02:28.410851 | orchestrator |
2025-06-02 01:02:28.410862 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 01:02:28.410879 | orchestrator | Monday 02 June 2025 01:02:27 +0000 (0:00:07.336) 0:02:46.947 ***********
2025-06-02 01:02:28.410897 | orchestrator | ===============================================================================
2025-06-02 01:02:28.410915 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.08s
2025-06-02 01:02:28.410933 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.14s
2025-06-02 01:02:28.410949 | orchestrator | designate : Restart designate-central container ------------------------ 10.33s
2025-06-02 01:02:28.410960 | orchestrator | designate : Restart designate-api container ----------------------------- 9.98s
2025-06-02 01:02:28.410971 | orchestrator | designate : Restart designate-producer container ------------------------ 8.73s
2025-06-02 01:02:28.410982 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.55s
2025-06-02 01:02:28.410993 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.34s
2025-06-02 01:02:28.411004 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.91s
2025-06-02 01:02:28.411014 | orchestrator | designate : Copying over config.json files for services ----------------- 6.67s
2025-06-02 01:02:28.411025 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.61s
2025-06-02 01:02:28.411036 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.48s
2025-06-02 01:02:28.411047 | orchestrator | designate : Restart designate-worker container -------------------------- 5.29s
2025-06-02 01:02:28.411058 | orchestrator | designate : Check designate containers ---------------------------------- 4.81s
2025-06-02 01:02:28.411069 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.33s
2025-06-02 01:02:28.411080 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.17s
2025-06-02 01:02:28.411090 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.96s
2025-06-02 01:02:28.411101 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.83s
2025-06-02 01:02:28.411112 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.39s
2025-06-02 01:02:28.411123 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.25s
2025-06-02 01:02:28.411134 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.20s
2025-06-02 01:02:28.411145 | orchestrator | 2025-06-02 01:02:28 | INFO  | Task 543a28f0-3860-4fc0-a8fd-9e8ed81df01b is in state STARTED
2025-06-02 01:02:28.411170 | orchestrator | 2025-06-02 01:02:28 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state STARTED
2025-06-02 01:02:28.411182 | orchestrator | 2025-06-02 01:02:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 01:02:31.459966 | orchestrator | 2025-06-02 01:02:31 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED
2025-06-02 01:02:31.461676 | orchestrator | 2025-06-02 01:02:31 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 01:02:31.463515 | orchestrator | 2025-06-02 01:02:31 | INFO  | Task 543a28f0-3860-4fc0-a8fd-9e8ed81df01b is in state STARTED
2025-06-02 01:02:31.465194 | orchestrator | 2025-06-02 01:02:31 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED
2025-06-02 01:02:31.468839 | orchestrator | 2025-06-02 01:02:31 | INFO  | Task 3969b186-dec7-4946-aa94-3132ebf95bde is in state SUCCESS
2025-06-02 01:02:31.472136 | orchestrator |
2025-06-02 01:02:31.472182 | orchestrator |
2025-06-02 01:02:31.472194 | orchestrator | PLAY [Group
hosts based on configuration] **************************************
2025-06-02 01:02:31.472207 | orchestrator |
2025-06-02 01:02:31.472218 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 01:02:31.472230 | orchestrator | Monday 02 June 2025 00:58:25 +0000 (0:00:00.280) 0:00:00.280 ***********
2025-06-02 01:02:31.472243 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:02:31.472257 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:02:31.472269 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:02:31.472281 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:02:31.472352 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:02:31.472364 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:02:31.472376 | orchestrator |
2025-06-02 01:02:31.472387 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 01:02:31.472399 | orchestrator | Monday 02 June 2025 00:58:26 +0000 (0:00:00.674) 0:00:00.954 ***********
2025-06-02 01:02:31.472410 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-06-02 01:02:31.472422 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-06-02 01:02:31.472433 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-06-02 01:02:31.472444 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-06-02 01:02:31.472455 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-06-02 01:02:31.472466 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-06-02 01:02:31.472477 | orchestrator |
2025-06-02 01:02:31.472489 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-06-02 01:02:31.472500 | orchestrator |
2025-06-02 01:02:31.472511 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-02 01:02:31.472522 | orchestrator | Monday 02 June 2025 00:58:27 +0000 (0:00:00.592) 0:00:01.547 ***********
2025-06-02 01:02:31.472534 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 01:02:31.472547 | orchestrator |
2025-06-02 01:02:31.472558 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-06-02 01:02:31.472569 | orchestrator | Monday 02 June 2025 00:58:28 +0000 (0:00:01.183) 0:00:02.731 ***********
2025-06-02 01:02:31.472580 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:02:31.472592 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:02:31.472603 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:02:31.472614 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:02:31.472625 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:02:31.472636 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:02:31.472647 | orchestrator |
2025-06-02 01:02:31.472659 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-06-02 01:02:31.472670 | orchestrator | Monday 02 June 2025 00:58:29 +0000 (0:00:01.298) 0:00:04.029 ***********
2025-06-02 01:02:31.472704 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:02:31.472719 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:02:31.472731 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:02:31.472744 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:02:31.472758 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:02:31.472771 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:02:31.472784 | orchestrator |
2025-06-02 01:02:31.472798 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-06-02 01:02:31.472810 | orchestrator | Monday 02 June 2025 00:58:30 +0000 (0:00:01.060) 0:00:05.090 ***********
2025-06-02 01:02:31.472824 | orchestrator | ok: [testbed-node-0] => {
2025-06-02 01:02:31.472837 | orchestrator |  "changed": false,
2025-06-02 01:02:31.472851 | orchestrator |  "msg": "All assertions passed"
2025-06-02 01:02:31.472863 | orchestrator | }
2025-06-02 01:02:31.472877 | orchestrator | ok: [testbed-node-1] => {
2025-06-02 01:02:31.472890 | orchestrator |  "changed": false,
2025-06-02 01:02:31.472903 | orchestrator |  "msg": "All assertions passed"
2025-06-02 01:02:31.472916 | orchestrator | }
2025-06-02 01:02:31.472930 | orchestrator | ok: [testbed-node-2] => {
2025-06-02 01:02:31.472941 | orchestrator |  "changed": false,
2025-06-02 01:02:31.472952 | orchestrator |  "msg": "All assertions passed"
2025-06-02 01:02:31.472963 | orchestrator | }
2025-06-02 01:02:31.472974 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 01:02:31.472985 | orchestrator |  "changed": false,
2025-06-02 01:02:31.472996 | orchestrator |  "msg": "All assertions passed"
2025-06-02 01:02:31.473007 | orchestrator | }
2025-06-02 01:02:31.473018 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 01:02:31.473029 | orchestrator |  "changed": false,
2025-06-02 01:02:31.473040 | orchestrator |  "msg": "All assertions passed"
2025-06-02 01:02:31.473051 | orchestrator | }
2025-06-02 01:02:31.473062 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 01:02:31.473073 | orchestrator |  "changed": false,
2025-06-02 01:02:31.473084 | orchestrator |  "msg": "All assertions passed"
2025-06-02 01:02:31.473095 | orchestrator | }
2025-06-02 01:02:31.473106 | orchestrator |
2025-06-02 01:02:31.473117 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-06-02 01:02:31.473129 | orchestrator | Monday 02 June 2025 00:58:31 +0000 (0:00:00.764) 0:00:05.855 ***********
2025-06-02 01:02:31.473139 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.473150 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.473161 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.473172 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.473192 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.473203 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.473214 | orchestrator |
2025-06-02 01:02:31.473225 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-06-02 01:02:31.473236 | orchestrator | Monday 02 June 2025 00:58:32 +0000 (0:00:00.622) 0:00:06.477 ***********
2025-06-02 01:02:31.473247 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-06-02 01:02:31.473258 | orchestrator |
2025-06-02 01:02:31.473269 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-06-02 01:02:31.473280 | orchestrator | Monday 02 June 2025 00:58:35 +0000 (0:00:03.283) 0:00:09.761 ***********
2025-06-02 01:02:31.473323 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-06-02 01:02:31.473336 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-06-02 01:02:31.473347 | orchestrator |
2025-06-02 01:02:31.473372 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-06-02 01:02:31.473384 | orchestrator | Monday 02 June 2025 00:58:41 +0000 (0:00:06.312) 0:00:16.074 ***********
2025-06-02 01:02:31.473395 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 01:02:31.473406 | orchestrator |
2025-06-02 01:02:31.473416 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-06-02 01:02:31.473441 | orchestrator | Monday 02 June 2025 00:58:44 +0000 (0:00:03.036) 0:00:19.110 ***********
2025-06-02 01:02:31.473452 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 01:02:31.473463 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-06-02 01:02:31.473474 | orchestrator |
2025-06-02 01:02:31.473485 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-06-02 01:02:31.473496 | orchestrator | Monday 02 June 2025 00:58:48 +0000 (0:00:03.573) 0:00:22.684 ***********
2025-06-02 01:02:31.473507 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 01:02:31.473518 | orchestrator |
2025-06-02 01:02:31.473529 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-06-02 01:02:31.473540 | orchestrator | Monday 02 June 2025 00:58:51 +0000 (0:00:03.174) 0:00:25.859 ***********
2025-06-02 01:02:31.473550 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-06-02 01:02:31.473561 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-06-02 01:02:31.473572 | orchestrator |
2025-06-02 01:02:31.473583 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-02 01:02:31.473594 | orchestrator | Monday 02 June 2025 00:58:58 +0000 (0:00:07.375) 0:00:33.234 ***********
2025-06-02 01:02:31.473605 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.473616 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.473627 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.473637 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.473648 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.473659 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.473670 | orchestrator |
2025-06-02 01:02:31.473681 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-06-02 01:02:31.473692 | orchestrator | Monday 02 June 2025 00:58:59 +0000 (0:00:00.703) 0:00:33.938 ***********
2025-06-02 01:02:31.473702 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.473713 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.473724 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.473735 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.473746 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.473757 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.473768 | orchestrator |
2025-06-02 01:02:31.473779 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-06-02 01:02:31.473790 | orchestrator | Monday 02 June 2025 00:59:01 +0000 (0:00:01.906) 0:00:35.844 ***********
2025-06-02 01:02:31.473801 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:02:31.473812 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:02:31.473823 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:02:31.473834 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:02:31.473845 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:02:31.473856 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:02:31.473867 | orchestrator |
2025-06-02 01:02:31.473878 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-02 01:02:31.473889 | orchestrator | Monday 02 June 2025 00:59:02 +0000 (0:00:01.559) 0:00:37.404 ***********
2025-06-02 01:02:31.473900 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.473911 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.473922 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.473933 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.473944 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.473954 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.473965 | orchestrator |
2025-06-02 01:02:31.473976 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-06-02 01:02:31.473987 | orchestrator | Monday 02 June 2025 00:59:04 +0000 (0:00:01.942) 0:00:39.346 ***********
2025-06-02 01:02:31.474007 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474242 | orchestrator |
2025-06-02 01:02:31.474254 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-06-02 01:02:31.474265 | orchestrator | Monday 02 June 2025 00:59:07 +0000 (0:00:02.540) 0:00:41.887 ***********
2025-06-02 01:02:31.474276 | orchestrator | [WARNING]: Skipped
2025-06-02 01:02:31.474308 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-06-02 01:02:31.474320 | orchestrator | due to this access issue:
2025-06-02 01:02:31.474331 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-06-02 01:02:31.474342 | orchestrator | a directory
2025-06-02 01:02:31.474354 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 01:02:31.474365 | orchestrator |
2025-06-02 01:02:31.474390 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-02 01:02:31.474402 | orchestrator | Monday 02 June 2025 00:59:08 +0000 (0:00:00.762) 0:00:42.650 ***********
2025-06-02 01:02:31.474413 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 01:02:31.474426 | orchestrator |
2025-06-02 01:02:31.474437 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-06-02 01:02:31.474448 | orchestrator | Monday 02 June 2025 00:59:09 +0000 (0:00:01.064) 0:00:43.714 ***********
2025-06-02 01:02:31.474459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474552 | orchestrator |
2025-06-02 01:02:31.474563 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-06-02 01:02:31.474574 | orchestrator | Monday 02 June 2025 00:59:12 +0000 (0:00:02.144) 0:00:48.709 ***********
2025-06-02 01:02:31.474586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474604 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.474621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474633 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.474651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474662 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.474674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474686 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.474697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474715 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.474726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474738 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.474749 | orchestrator |
2025-06-02 01:02:31.474760 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-06-02 01:02:31.474771 | orchestrator | Monday 02 June 2025 00:59:14 +0000 (0:00:02.144) 0:00:48.709 ***********
2025-06-02 01:02:31.474787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474799 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.474818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.474830 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.474841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.474859 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.474871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 01:02:31.474882 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:31.474893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 01:02:31.474905 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:02:31.474920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 01:02:31.474932 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:02:31.474943 | orchestrator | 2025-06-02 01:02:31.474955 | orchestrator | TASK [neutron : Creating TLS backend 
PEM File] ********************************* 2025-06-02 01:02:31.474971 | orchestrator | Monday 02 June 2025 00:59:17 +0000 (0:00:03.579) 0:00:52.288 *********** 2025-06-02 01:02:31.474983 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:31.474994 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:31.475005 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:02:31.475016 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:31.475027 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:02:31.475038 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:02:31.475049 | orchestrator | 2025-06-02 01:02:31.475060 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-02 01:02:31.475071 | orchestrator | Monday 02 June 2025 00:59:20 +0000 (0:00:02.376) 0:00:54.665 *********** 2025-06-02 01:02:31.475081 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:31.475092 | orchestrator | 2025-06-02 01:02:31.475103 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-02 01:02:31.475114 | orchestrator | Monday 02 June 2025 00:59:20 +0000 (0:00:00.170) 0:00:54.835 *********** 2025-06-02 01:02:31.475125 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:31.475142 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:31.475153 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:31.475163 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:02:31.475174 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:02:31.475185 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:02:31.475196 | orchestrator | 2025-06-02 01:02:31.475206 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-02 01:02:31.475217 | orchestrator | Monday 02 June 2025 00:59:22 +0000 (0:00:01.712) 0:00:56.548 *********** 2025-06-02 01:02:31.475229 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 01:02:31.475241 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:31.475252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 01:02:31.475263 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
01:02:31.475279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 01:02:31.475318 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:31.475337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 01:02:31.475360 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:02:31.475371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 01:02:31.475382 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:02:31.475393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 01:02:31.475405 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:02:31.475416 | orchestrator | 2025-06-02 01:02:31.475427 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-02 01:02:31.475438 | orchestrator | Monday 02 June 2025 00:59:25 +0000 (0:00:03.503) 0:01:00.051 *********** 2025-06-02 01:02:31.475449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.475472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.475491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.475503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 01:02:31.475515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 01:02:31.475526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 01:02:31.475537 | orchestrator | 2025-06-02 01:02:31.475548 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-02 01:02:31.475564 | orchestrator | Monday 02 June 2025 00:59:28 +0000 (0:00:03.327) 0:01:03.378 *********** 2025-06-02 01:02:31.475581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.475599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.475611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.475622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 01:02:31.475639 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 01:02:31.475657 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 01:02:31.475675 | orchestrator | 2025-06-02 01:02:31.475686 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-02 01:02:31.475697 | orchestrator | Monday 02 June 2025 00:59:34 +0000 (0:00:05.735) 0:01:09.114 *********** 2025-06-02 01:02:31.475708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.475720 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 01:02:31.475731 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:02:31.475743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 01:02:31.475754 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:02:31.475770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 01:02:31.475788 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:02:31.475807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.475819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.475830 | orchestrator | 2025-06-02 01:02:31.475842 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-02 01:02:31.475853 | orchestrator | Monday 02 June 2025 00:59:38 +0000 (0:00:03.951) 0:01:13.066 *********** 2025-06-02 01:02:31.475864 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:02:31.475875 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:02:31.475886 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:02:31.475897 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:02:31.475908 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:31.475918 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:02:31.475929 | orchestrator | 2025-06-02 01:02:31.475940 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-02 01:02:31.475951 | orchestrator | Monday 02 June 2025 00:59:41 +0000 (0:00:02.820) 0:01:15.887 *********** 2025-06-02 01:02:31.475963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.475974 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.475990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.476007 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.476026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.476038 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.476050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.476061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.476073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.476095 | orchestrator |
2025-06-02 01:02:31.476106 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-06-02 01:02:31.476117 | orchestrator | Monday 02 June 2025 00:59:44 +0000 (0:00:03.374) 0:01:19.261 ***********
2025-06-02 01:02:31.476128 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.476139 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.476150 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.476160 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.476171 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.476187 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.476198 | orchestrator |
2025-06-02 01:02:31.476209 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-06-02 01:02:31.476220 | orchestrator | Monday 02 June 2025 00:59:47 +0000 (0:00:02.247) 0:01:21.509 ***********
2025-06-02 01:02:31.476231 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.476241 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.476252 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.476263 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.476274 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.476301 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.476313 | orchestrator |
2025-06-02 01:02:31.476324 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-06-02 01:02:31.476335 | orchestrator | Monday 02 June 2025 00:59:49 +0000 (0:00:02.216) 0:01:23.726 ***********
2025-06-02 01:02:31.476346 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.476358 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.476369 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.476385 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.476397 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.476408 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.476419 | orchestrator |
2025-06-02 01:02:31.476430 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-06-02 01:02:31.476441 | orchestrator | Monday 02 June 2025 00:59:51 +0000 (0:00:02.196) 0:01:25.922 ***********
2025-06-02 01:02:31.476452 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.476463 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.476474 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.476485 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.476496 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.476506 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.476517 | orchestrator |
2025-06-02 01:02:31.476528 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-06-02 01:02:31.476539 | orchestrator | Monday 02 June 2025 00:59:53 +0000 (0:00:01.948) 0:01:27.871 ***********
2025-06-02 01:02:31.476550 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.476561 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.476572 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.476583 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.476594 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.476605 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.476616 | orchestrator |
2025-06-02 01:02:31.476627 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-06-02 01:02:31.476638 | orchestrator | Monday 02 June 2025 00:59:55 +0000 (0:00:02.422) 0:01:30.294 ***********
2025-06-02 01:02:31.476648 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.476659 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.476670 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.476681 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.476700 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.476711 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.476722 | orchestrator |
2025-06-02 01:02:31.476733 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-06-02 01:02:31.476745 | orchestrator | Monday 02 June 2025 00:59:58 +0000 (0:00:02.307) 0:01:32.601 ***********
2025-06-02 01:02:31.476756 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 01:02:31.476766 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.476777 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 01:02:31.476788 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.476800 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 01:02:31.476811 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.476822 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 01:02:31.476833 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.476844 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 01:02:31.476855 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.476866 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 01:02:31.476877 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.476888 | orchestrator |
2025-06-02 01:02:31.476899 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-06-02 01:02:31.476910 | orchestrator | Monday 02 June 2025 01:00:00 +0000 (0:00:02.239) 0:01:34.840 ***********
2025-06-02 01:02:31.476921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.476932 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.476953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.476966 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.476977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.476998 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.477010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.477022 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.477033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.477044 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.477060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.477072 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.477083 | orchestrator |
2025-06-02 01:02:31.477094 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-06-02 01:02:31.477105 | orchestrator | Monday 02 June 2025 01:00:02 +0000 (0:00:02.335) 0:01:37.176 ***********
2025-06-02 01:02:31.477122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.477140 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.477152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.477163 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.477174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.477186 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.477197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.477213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.477225 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.477242 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.477260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.477272 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.477299 | orchestrator |
2025-06-02 01:02:31.477311 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-06-02 01:02:31.477322 | orchestrator | Monday 02 June 2025 01:00:04 +0000 (0:00:02.010) 0:01:39.186 ***********
2025-06-02 01:02:31.477333 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.477344 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.477355 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.477366 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.477377 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.477388 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.477398 | orchestrator |
2025-06-02 01:02:31.477409 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-06-02 01:02:31.477420 | orchestrator | Monday 02 June 2025 01:00:07 +0000 (0:00:02.971) 0:01:42.157 ***********
2025-06-02 01:02:31.477431 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.477442 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.477453 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.477463 | orchestrator | changed: [testbed-node-3]
2025-06-02 01:02:31.477474 | orchestrator | changed: [testbed-node-5]
2025-06-02 01:02:31.477485 | orchestrator | changed: [testbed-node-4]
2025-06-02 01:02:31.477496 | orchestrator |
2025-06-02 01:02:31.477507 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-06-02 01:02:31.477518 | orchestrator | Monday 02 June 2025 01:00:12 +0000 (0:00:05.255) 0:01:47.413 ***********
2025-06-02 01:02:31.477529 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.477539 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.477550 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.477561 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.477572 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.477583 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.477595 | orchestrator |
2025-06-02 01:02:31.477605 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-06-02 01:02:31.477616 | orchestrator | Monday 02 June 2025 01:00:15 +0000 (0:00:02.785) 0:01:50.198 ***********
2025-06-02 01:02:31.477627 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.477638 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.477648 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.477659 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.477670 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.477681 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.477692 | orchestrator |
2025-06-02 01:02:31.477703 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-06-02 01:02:31.477714 | orchestrator | Monday 02 June 2025 01:00:17 +0000 (0:00:02.115) 0:01:52.314 ***********
2025-06-02 01:02:31.477725 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.477735 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.477746 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.477757 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.477768 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.477785 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.477796 | orchestrator |
2025-06-02 01:02:31.477807 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-06-02 01:02:31.477818 | orchestrator | Monday 02 June 2025 01:00:20 +0000 (0:00:02.925) 0:01:55.239 ***********
2025-06-02 01:02:31.477829 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.477840 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.477851 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.477862 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.477872 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.477883 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.477894 | orchestrator |
2025-06-02 01:02:31.477905 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-06-02 01:02:31.477916 | orchestrator | Monday 02 June 2025 01:00:24 +0000 (0:00:03.465) 0:01:58.704 ***********
2025-06-02 01:02:31.477927 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.477938 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.477954 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.477965 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.477976 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.477987 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.477998 | orchestrator |
2025-06-02 01:02:31.478009 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-06-02 01:02:31.478047 | orchestrator | Monday 02 June 2025 01:00:27 +0000 (0:00:02.908) 0:02:01.613 ***********
2025-06-02 01:02:31.478061 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.478071 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.478083 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.478093 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.478104 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.478115 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.478126 | orchestrator |
2025-06-02 01:02:31.478137 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-06-02 01:02:31.478148 | orchestrator | Monday 02 June 2025 01:00:29 +0000 (0:00:02.063) 0:02:03.676 ***********
2025-06-02 01:02:31.478159 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.478177 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.478189 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.478200 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.478211 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.478222 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.478232 | orchestrator |
2025-06-02 01:02:31.478244 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-06-02 01:02:31.478255 | orchestrator | Monday 02 June 2025 01:00:32 +0000 (0:00:03.223) 0:02:06.899 ***********
2025-06-02 01:02:31.478266 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.478277 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.478338 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.478350 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.478361 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.478372 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.478383 | orchestrator |
2025-06-02 01:02:31.478394 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-06-02 01:02:31.478405 | orchestrator | Monday 02 June 2025 01:00:36 +0000 (0:00:03.997) 0:02:10.897 ***********
2025-06-02 01:02:31.478416 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 01:02:31.478427 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.478438 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 01:02:31.478449 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.478460 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 01:02:31.478479 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.478490 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 01:02:31.478501 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.478512 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 01:02:31.478522 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.478532 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 01:02:31.478541 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.478551 | orchestrator |
2025-06-02 01:02:31.478561 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-06-02 01:02:31.478571 | orchestrator | Monday 02 June 2025 01:00:39 +0000 (0:00:02.791) 0:02:13.689 ***********
2025-06-02 01:02:31.478581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.478591 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:02:31.478606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.478617 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:02:31.478689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.478703 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:02:31.478713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.478729 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:02:31.478739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 01:02:31.478749 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:02:31.478759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.478770 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:02:31.478779 | orchestrator |
2025-06-02 01:02:31.478789 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-06-02 01:02:31.478799 | orchestrator | Monday 02 June 2025 01:00:41 +0000 (0:00:02.519) 0:02:16.209 ***********
2025-06-02 01:02:31.478814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.478830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 01:02:31.478847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 01:02:31.478857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 01:02:31.478868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 01:02:31.478882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 01:02:31.478892 | orchestrator | 2025-06-02 01:02:31.478902 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 01:02:31.478917 | orchestrator | Monday 02 June 2025 01:00:45 +0000 (0:00:03.568) 0:02:19.777 *********** 2025-06-02 01:02:31.478927 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:31.478942 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:31.478952 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:31.478962 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:02:31.478972 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:02:31.478981 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:02:31.478991 | orchestrator | 2025-06-02 01:02:31.479001 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-02 01:02:31.479010 | orchestrator | Monday 02 June 2025 01:00:45 +0000 (0:00:00.472) 0:02:20.250 *********** 2025-06-02 01:02:31.479020 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:31.479029 | orchestrator | 2025-06-02 01:02:31.479039 | 
orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-02 01:02:31.479049 | orchestrator | Monday 02 June 2025 01:00:47 +0000 (0:00:02.152) 0:02:22.403 *********** 2025-06-02 01:02:31.479058 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:31.479068 | orchestrator | 2025-06-02 01:02:31.479078 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-02 01:02:31.479087 | orchestrator | Monday 02 June 2025 01:00:49 +0000 (0:00:01.935) 0:02:24.338 *********** 2025-06-02 01:02:31.479097 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:31.479107 | orchestrator | 2025-06-02 01:02:31.479117 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 01:02:31.479126 | orchestrator | Monday 02 June 2025 01:01:33 +0000 (0:00:43.482) 0:03:07.820 *********** 2025-06-02 01:02:31.479136 | orchestrator | 2025-06-02 01:02:31.479146 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 01:02:31.479155 | orchestrator | Monday 02 June 2025 01:01:33 +0000 (0:00:00.064) 0:03:07.884 *********** 2025-06-02 01:02:31.479165 | orchestrator | 2025-06-02 01:02:31.479175 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 01:02:31.479184 | orchestrator | Monday 02 June 2025 01:01:33 +0000 (0:00:00.217) 0:03:08.101 *********** 2025-06-02 01:02:31.479194 | orchestrator | 2025-06-02 01:02:31.479204 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 01:02:31.479213 | orchestrator | Monday 02 June 2025 01:01:33 +0000 (0:00:00.063) 0:03:08.165 *********** 2025-06-02 01:02:31.479223 | orchestrator | 2025-06-02 01:02:31.479233 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 01:02:31.479242 | orchestrator | 
Monday 02 June 2025 01:01:33 +0000 (0:00:00.062) 0:03:08.227 *********** 2025-06-02 01:02:31.479252 | orchestrator | 2025-06-02 01:02:31.479261 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 01:02:31.479271 | orchestrator | Monday 02 June 2025 01:01:33 +0000 (0:00:00.060) 0:03:08.288 *********** 2025-06-02 01:02:31.479281 | orchestrator | 2025-06-02 01:02:31.479336 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-02 01:02:31.479346 | orchestrator | Monday 02 June 2025 01:01:33 +0000 (0:00:00.060) 0:03:08.349 *********** 2025-06-02 01:02:31.479356 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:31.479366 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:02:31.479376 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:02:31.479386 | orchestrator | 2025-06-02 01:02:31.479396 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-06-02 01:02:31.479406 | orchestrator | Monday 02 June 2025 01:02:01 +0000 (0:00:27.208) 0:03:35.557 *********** 2025-06-02 01:02:31.479415 | orchestrator | changed: [testbed-node-5] 2025-06-02 01:02:31.479425 | orchestrator | changed: [testbed-node-3] 2025-06-02 01:02:31.479435 | orchestrator | changed: [testbed-node-4] 2025-06-02 01:02:31.479445 | orchestrator | 2025-06-02 01:02:31.479455 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 01:02:31.479465 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 01:02:31.479476 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 01:02:31.479492 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 01:02:31.479502 | orchestrator | testbed-node-3 : ok=15  
changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 01:02:31.479512 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 01:02:31.479530 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 01:02:31.479540 | orchestrator | 2025-06-02 01:02:31.479550 | orchestrator | 2025-06-02 01:02:31.479560 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 01:02:31.479569 | orchestrator | Monday 02 June 2025 01:02:28 +0000 (0:00:27.539) 0:04:03.096 *********** 2025-06-02 01:02:31.479579 | orchestrator | =============================================================================== 2025-06-02 01:02:31.479589 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.48s 2025-06-02 01:02:31.479599 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 27.54s 2025-06-02 01:02:31.479609 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.21s 2025-06-02 01:02:31.479618 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.38s 2025-06-02 01:02:31.479633 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.31s 2025-06-02 01:02:31.479643 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.74s 2025-06-02 01:02:31.479653 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.26s 2025-06-02 01:02:31.479663 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 4.00s 2025-06-02 01:02:31.479673 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.95s 2025-06-02 01:02:31.479682 | orchestrator | service-cert-copy : neutron | Copying over backend 
internal TLS key ----- 3.58s 2025-06-02 01:02:31.479692 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.57s 2025-06-02 01:02:31.479702 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.57s 2025-06-02 01:02:31.479712 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.50s 2025-06-02 01:02:31.479721 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.47s 2025-06-02 01:02:31.479729 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.37s 2025-06-02 01:02:31.479737 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.33s 2025-06-02 01:02:31.479745 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.28s 2025-06-02 01:02:31.479753 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.22s 2025-06-02 01:02:31.479761 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.17s 2025-06-02 01:02:31.479769 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.04s 2025-06-02 01:02:31.479777 | orchestrator | 2025-06-02 01:02:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:02:34.505546 | orchestrator | 2025-06-02 01:02:34 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:02:34.506827 | orchestrator | 2025-06-02 01:02:34 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:02:34.507573 | orchestrator | 2025-06-02 01:02:34 | INFO  | Task 543a28f0-3860-4fc0-a8fd-9e8ed81df01b is in state STARTED 2025-06-02 01:02:34.510855 | orchestrator | 2025-06-02 01:02:34 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:02:34.510902 | orchestrator | 2025-06-02 01:02:34 | INFO  | 
Wait 1 second(s) until the next check 2025-06-02 01:02:37.540014 | orchestrator | 2025-06-02 01:02:37 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:02:37.543600 | orchestrator | 2025-06-02 01:02:37 | INFO  | Task ed7a0a99-eee4-42a6-bef8-37abfa453606 is in state STARTED 2025-06-02 01:02:37.545962 | orchestrator | 2025-06-02 01:02:37 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:02:37.548347 | orchestrator | 2025-06-02 01:02:37 | INFO  | Task 543a28f0-3860-4fc0-a8fd-9e8ed81df01b is in state SUCCESS 2025-06-02 01:02:37.550368 | orchestrator | 2025-06-02 01:02:37.550396 | orchestrator | 2025-06-02 01:02:37.550408 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 01:02:37.550419 | orchestrator | 2025-06-02 01:02:37.550430 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 01:02:37.550442 | orchestrator | Monday 02 June 2025 01:01:32 +0000 (0:00:00.358) 0:00:00.358 *********** 2025-06-02 01:02:37.550453 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:02:37.550468 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:02:37.550479 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:02:37.550490 | orchestrator | 2025-06-02 01:02:37.550502 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 01:02:37.550514 | orchestrator | Monday 02 June 2025 01:01:32 +0000 (0:00:00.317) 0:00:00.676 *********** 2025-06-02 01:02:37.550525 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-02 01:02:37.550537 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-02 01:02:37.550548 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-02 01:02:37.550559 | orchestrator | 2025-06-02 01:02:37.550569 | orchestrator | PLAY [Apply role placement] 
**************************************************** 2025-06-02 01:02:37.550580 | orchestrator | 2025-06-02 01:02:37.550591 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 01:02:37.550602 | orchestrator | Monday 02 June 2025 01:01:33 +0000 (0:00:00.537) 0:00:01.214 *********** 2025-06-02 01:02:37.550629 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:02:37.550642 | orchestrator | 2025-06-02 01:02:37.550653 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-02 01:02:37.550663 | orchestrator | Monday 02 June 2025 01:01:33 +0000 (0:00:00.581) 0:00:01.795 *********** 2025-06-02 01:02:37.550674 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-02 01:02:37.550685 | orchestrator | 2025-06-02 01:02:37.550696 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-02 01:02:37.550707 | orchestrator | Monday 02 June 2025 01:01:37 +0000 (0:00:03.377) 0:00:05.173 *********** 2025-06-02 01:02:37.550717 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-02 01:02:37.550729 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-02 01:02:37.550740 | orchestrator | 2025-06-02 01:02:37.550751 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-02 01:02:37.550761 | orchestrator | Monday 02 June 2025 01:01:43 +0000 (0:00:06.287) 0:00:11.460 *********** 2025-06-02 01:02:37.550772 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 01:02:37.550783 | orchestrator | 2025-06-02 01:02:37.550794 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-02 
01:02:37.550805 | orchestrator | Monday 02 June 2025 01:01:46 +0000 (0:00:03.086) 0:00:14.547 *********** 2025-06-02 01:02:37.550816 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 01:02:37.550827 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-02 01:02:37.550862 | orchestrator | 2025-06-02 01:02:37.550873 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-02 01:02:37.550954 | orchestrator | Monday 02 June 2025 01:01:50 +0000 (0:00:03.836) 0:00:18.383 *********** 2025-06-02 01:02:37.550968 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 01:02:37.550979 | orchestrator | 2025-06-02 01:02:37.550990 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-02 01:02:37.551001 | orchestrator | Monday 02 June 2025 01:01:53 +0000 (0:00:03.119) 0:00:21.503 *********** 2025-06-02 01:02:37.551012 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-02 01:02:37.551023 | orchestrator | 2025-06-02 01:02:37.551034 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 01:02:37.551044 | orchestrator | Monday 02 June 2025 01:01:57 +0000 (0:00:04.164) 0:00:25.668 *********** 2025-06-02 01:02:37.551056 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:37.551067 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:37.551079 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:37.551090 | orchestrator | 2025-06-02 01:02:37.551101 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-02 01:02:37.551111 | orchestrator | Monday 02 June 2025 01:01:57 +0000 (0:00:00.249) 0:00:25.917 *********** 2025-06-02 01:02:37.551126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551194 | orchestrator | 2025-06-02 01:02:37.551206 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-02 01:02:37.551217 | orchestrator | Monday 02 June 2025 01:01:58 +0000 (0:00:00.721) 0:00:26.639 *********** 2025-06-02 01:02:37.551228 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:37.551240 | orchestrator | 2025-06-02 01:02:37.551251 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-02 01:02:37.551262 | orchestrator | Monday 02 June 2025 01:01:58 +0000 (0:00:00.114) 0:00:26.753 *********** 2025-06-02 01:02:37.551273 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:37.551302 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:37.551314 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:37.551326 | orchestrator | 2025-06-02 01:02:37.551337 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 01:02:37.551348 | orchestrator | Monday 02 June 2025 01:01:58 +0000 (0:00:00.350) 0:00:27.103 *********** 2025-06-02 01:02:37.551359 | orchestrator | included: 
/ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:02:37.551370 | orchestrator | 2025-06-02 01:02:37.551380 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-02 01:02:37.551391 | orchestrator | Monday 02 June 2025 01:01:59 +0000 (0:00:00.530) 0:00:27.634 *********** 2025-06-02 01:02:37.551403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551461 | orchestrator | 2025-06-02 01:02:37.551473 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-02 01:02:37.551484 | orchestrator | Monday 02 June 2025 01:02:01 +0000 (0:00:01.568) 0:00:29.202 *********** 2025-06-02 01:02:37.551495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 01:02:37.551507 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:37.551520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 01:02:37.551531 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:37.551549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 01:02:37.551562 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:37.551573 | orchestrator | 2025-06-02 01:02:37.551584 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-02 01:02:37.551598 | orchestrator | Monday 02 June 2025 01:02:01 +0000 (0:00:00.545) 0:00:29.748 *********** 2025-06-02 01:02:37.551616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 01:02:37.551637 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:37.551651 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 01:02:37.551666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 01:02:37.551679 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:37.551693 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:37.551706 | 
orchestrator | 2025-06-02 01:02:37.551718 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-02 01:02:37.551733 | orchestrator | Monday 02 June 2025 01:02:02 +0000 (0:00:00.794) 0:00:30.543 *********** 2025-06-02 01:02:37.551754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551808 | orchestrator | 2025-06-02 01:02:37.551820 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-02 01:02:37.551832 | orchestrator | Monday 02 June 2025 01:02:03 +0000 (0:00:01.401) 0:00:31.945 *********** 2025-06-02 01:02:37.551846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.551903 | orchestrator | 2025-06-02 01:02:37.551916 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-02 01:02:37.551933 | orchestrator | Monday 02 June 2025 01:02:06 +0000 (0:00:02.542) 0:00:34.487 *********** 2025-06-02 01:02:37.551947 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-02 01:02:37.551960 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-02 01:02:37.551974 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-02 01:02:37.551987 | orchestrator | 2025-06-02 01:02:37.552001 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-02 01:02:37.552012 | orchestrator | Monday 02 June 2025 01:02:07 +0000 (0:00:01.356) 0:00:35.844 *********** 2025-06-02 01:02:37.552023 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:37.552034 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:02:37.552045 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:02:37.552056 | orchestrator | 2025-06-02 01:02:37.552068 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-02 01:02:37.552078 | orchestrator | Monday 02 June 2025 01:02:09 +0000 (0:00:01.990) 0:00:37.834 *********** 2025-06-02 01:02:37.552090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 01:02:37.552102 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:02:37.552113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 01:02:37.552130 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:02:37.552149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 01:02:37.552161 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:02:37.552172 | orchestrator | 2025-06-02 01:02:37.552183 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-02 01:02:37.552194 | orchestrator | Monday 02 June 2025 01:02:10 +0000 (0:00:00.758) 0:00:38.592 *********** 2025-06-02 01:02:37.552216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.552228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 01:02:37.552240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 
01:02:37.552258 | orchestrator | 2025-06-02 01:02:37.552269 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-02 01:02:37.552299 | orchestrator | Monday 02 June 2025 01:02:12 +0000 (0:00:01.661) 0:00:40.254 *********** 2025-06-02 01:02:37.552310 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:37.552322 | orchestrator | 2025-06-02 01:02:37.552332 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-02 01:02:37.552343 | orchestrator | Monday 02 June 2025 01:02:14 +0000 (0:00:02.054) 0:00:42.308 *********** 2025-06-02 01:02:37.552354 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:37.552365 | orchestrator | 2025-06-02 01:02:37.552376 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-02 01:02:37.552387 | orchestrator | Monday 02 June 2025 01:02:16 +0000 (0:00:02.286) 0:00:44.595 *********** 2025-06-02 01:02:37.552404 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:02:37.552416 | orchestrator | 2025-06-02 01:02:37.552427 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 01:02:37.552438 | orchestrator | Monday 02 June 2025 01:02:29 +0000 (0:00:12.952) 0:00:57.547 *********** 2025-06-02 01:02:37.552449 | orchestrator | 2025-06-02 01:02:37.552460 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 01:02:37.552471 | orchestrator | Monday 02 June 2025 01:02:29 +0000 (0:00:00.068) 0:00:57.616 *********** 2025-06-02 01:02:37.552482 | orchestrator | 2025-06-02 01:02:37.552492 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 01:02:37.552503 | orchestrator | Monday 02 June 2025 01:02:29 +0000 (0:00:00.064) 0:00:57.680 *********** 2025-06-02 01:02:37.552514 | orchestrator | 2025-06-02 01:02:37.552525 | 
orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-06-02 01:02:37.552536 | orchestrator | Monday 02 June 2025 01:02:29 +0000 (0:00:00.076) 0:00:57.757 ***********
2025-06-02 01:02:37.552547 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:02:37.552558 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:02:37.552569 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:02:37.552580 | orchestrator |
2025-06-02 01:02:37.552591 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 01:02:37.552607 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 01:02:37.552620 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 01:02:37.552631 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 01:02:37.552642 | orchestrator |
2025-06-02 01:02:37.552653 | orchestrator |
2025-06-02 01:02:37.552664 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 01:02:37.552674 | orchestrator | Monday 02 June 2025 01:02:35 +0000 (0:00:05.478) 0:01:03.235 ***********
2025-06-02 01:02:37.552685 | orchestrator | ===============================================================================
2025-06-02 01:02:37.552696 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.95s
2025-06-02 01:02:37.552707 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.29s
2025-06-02 01:02:37.552718 | orchestrator | placement : Restart placement-api container ----------------------------- 5.48s
2025-06-02 01:02:37.552729 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.16s
2025-06-02 01:02:37.552740 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.84s
2025-06-02 01:02:37.552751 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.38s
2025-06-02 01:02:37.552761 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.12s
2025-06-02 01:02:37.552778 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.09s
2025-06-02 01:02:37.552789 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.54s
2025-06-02 01:02:37.552800 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.29s
2025-06-02 01:02:37.552811 | orchestrator | placement : Creating placement databases -------------------------------- 2.05s
2025-06-02 01:02:37.552822 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.99s
2025-06-02 01:02:37.552833 | orchestrator | placement : Check placement containers ---------------------------------- 1.66s
2025-06-02 01:02:37.552844 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.57s
2025-06-02 01:02:37.552855 | orchestrator | placement : Copying over config.json files for services ----------------- 1.40s
2025-06-02 01:02:37.552865 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.36s
2025-06-02 01:02:37.552876 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.79s
2025-06-02 01:02:37.552887 | orchestrator | placement : Copying over existing policy file --------------------------- 0.76s
2025-06-02 01:02:37.552898 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.72s
2025-06-02 01:02:37.552909 | orchestrator | placement : include_tasks ----------------------------------------------- 0.58s
2025-06-02 01:02:37.552920 | orchestrator | 2025-06-02
01:02:37 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:02:37.552931 | orchestrator | 2025-06-02 01:02:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:02:40.583159 | orchestrator | 2025-06-02 01:02:40 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:02:40.584233 | orchestrator | 2025-06-02 01:02:40 | INFO  | Task ed7a0a99-eee4-42a6-bef8-37abfa453606 is in state STARTED 2025-06-02 01:02:40.586268 | orchestrator | 2025-06-02 01:02:40 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:02:40.586901 | orchestrator | 2025-06-02 01:02:40 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:02:40.587054 | orchestrator | 2025-06-02 01:02:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:02:43.620139 | orchestrator | 2025-06-02 01:02:43 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:02:43.620850 | orchestrator | 2025-06-02 01:02:43 | INFO  | Task ed7a0a99-eee4-42a6-bef8-37abfa453606 is in state SUCCESS 2025-06-02 01:02:43.624632 | orchestrator | 2025-06-02 01:02:43 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:02:43.626633 | orchestrator | 2025-06-02 01:02:43 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:02:43.628852 | orchestrator | 2025-06-02 01:02:43 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:02:43.629002 | orchestrator | 2025-06-02 01:02:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:02:46.662346 | orchestrator | 2025-06-02 01:02:46 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:02:46.662789 | orchestrator | 2025-06-02 01:02:46 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:02:46.662816 | orchestrator | 2025-06-02 01:02:46 | INFO  | Task 
7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:02:46.663541 | orchestrator | 2025-06-02 01:02:46 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:02:46.663562 | orchestrator | 2025-06-02 01:02:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:02:49.698727 | orchestrator | 2025-06-02 01:02:49 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:02:49.700158 | orchestrator | 2025-06-02 01:02:49 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:02:49.702275 | orchestrator | 2025-06-02 01:02:49 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:02:49.703511 | orchestrator | 2025-06-02 01:02:49 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:02:49.703537 | orchestrator | 2025-06-02 01:02:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:02:52.746432 | orchestrator | 2025-06-02 01:02:52 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:02:52.747206 | orchestrator | 2025-06-02 01:02:52 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:02:52.747896 | orchestrator | 2025-06-02 01:02:52 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:02:52.748852 | orchestrator | 2025-06-02 01:02:52 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:02:52.748867 | orchestrator | 2025-06-02 01:02:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:02:55.795802 | orchestrator | 2025-06-02 01:02:55 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:02:55.797678 | orchestrator | 2025-06-02 01:02:55 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:02:55.799067 | orchestrator | 2025-06-02 01:02:55 | INFO  | Task 
7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:02:55.800447 | orchestrator | 2025-06-02 01:02:55 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:02:55.800466 | orchestrator | 2025-06-02 01:02:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:02:58.842391 | orchestrator | 2025-06-02 01:02:58 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:02:58.843617 | orchestrator | 2025-06-02 01:02:58 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:02:58.845515 | orchestrator | 2025-06-02 01:02:58 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:02:58.846856 | orchestrator | 2025-06-02 01:02:58 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:02:58.846882 | orchestrator | 2025-06-02 01:02:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:03:01.892688 | orchestrator | 2025-06-02 01:03:01 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:03:01.894638 | orchestrator | 2025-06-02 01:03:01 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:03:01.896374 | orchestrator | 2025-06-02 01:03:01 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:03:01.898132 | orchestrator | 2025-06-02 01:03:01 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:03:01.898164 | orchestrator | 2025-06-02 01:03:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:03:04.944516 | orchestrator | 2025-06-02 01:03:04 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:03:04.946159 | orchestrator | 2025-06-02 01:03:04 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:03:04.948101 | orchestrator | 2025-06-02 01:03:04 | INFO  | Task 
7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:03:04.949960 | orchestrator | 2025-06-02 01:03:04 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:03:04.950064 | orchestrator | 2025-06-02 01:03:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:03:07.992504 | orchestrator | 2025-06-02 01:03:07 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:03:07.992849 | orchestrator | 2025-06-02 01:03:07 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:03:07.994138 | orchestrator | 2025-06-02 01:03:07 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:03:07.995702 | orchestrator | 2025-06-02 01:03:07 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:03:07.995741 | orchestrator | 2025-06-02 01:03:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:03:11.045563 | orchestrator | 2025-06-02 01:03:11 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:03:11.046471 | orchestrator | 2025-06-02 01:03:11 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:03:11.046666 | orchestrator | 2025-06-02 01:03:11 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:03:11.047517 | orchestrator | 2025-06-02 01:03:11 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:03:11.047643 | orchestrator | 2025-06-02 01:03:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:03:14.085201 | orchestrator | 2025-06-02 01:03:14 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED 2025-06-02 01:03:14.085366 | orchestrator | 2025-06-02 01:03:14 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:03:14.086779 | orchestrator | 2025-06-02 01:03:14 | INFO  | Task 
7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED
2025-06-02 01:03:14.088312 | orchestrator | 2025-06-02 01:03:14 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED
2025-06-02 01:03:14.088507 | orchestrator | 2025-06-02 01:03:14 | INFO  | Wait 1 second(s) until the next check
2025-06-02 01:03:17.128889 | orchestrator | 2025-06-02 01:03:17 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED
2025-06-02 01:03:17.129004 | orchestrator | 2025-06-02 01:03:17 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 01:03:17.131723 | orchestrator | 2025-06-02 01:03:17 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED
2025-06-02 01:03:17.131791 | orchestrator | 2025-06-02 01:03:17 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED
2025-06-02 01:03:17.131812 | orchestrator | 2025-06-02 01:03:17 | INFO  | Wait 1 second(s) until the next check
[... the same four status checks repeat every 3 seconds from 01:03:20 through 01:04:14 ...]
2025-06-02 01:04:18.028703 | orchestrator | 2025-06-02 01:04:18 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state STARTED
2025-06-02 01:04:18.029738 | orchestrator | 2025-06-02 01:04:18 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED
2025-06-02 01:04:18.031581 | orchestrator | 2025-06-02 01:04:18 | INFO  | Task 
7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED
2025-06-02 01:04:18.032885 | orchestrator | 2025-06-02 01:04:18 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED
2025-06-02 01:04:18.032917 | orchestrator | 2025-06-02 01:04:18 | INFO  | Wait 1 second(s) until the next check
2025-06-02 01:04:21.075105 | orchestrator | 2025-06-02 01:04:21 | INFO  | Task f3716380-6c62-4502-aff1-395ae38395bd is in state SUCCESS
2025-06-02 01:04:21.077343 | orchestrator |
2025-06-02 01:04:21.077395 | orchestrator |
2025-06-02 01:04:21.077403 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 01:04:21.077410 | orchestrator |
2025-06-02 01:04:21.077416 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 01:04:21.077422 | orchestrator | Monday 02 June 2025 01:02:39 +0000 (0:00:00.223) 0:00:00.223 ***********
2025-06-02 01:04:21.077428 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:21.077435 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:04:21.077441 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:04:21.077447 | orchestrator |
2025-06-02 01:04:21.077453 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 01:04:21.077458 | orchestrator | Monday 02 June 2025 01:02:39 +0000 (0:00:00.262) 0:00:00.485 ***********
2025-06-02 01:04:21.077464 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-06-02 01:04:21.077470 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-06-02 01:04:21.077476 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-06-02 01:04:21.077481 | orchestrator |
2025-06-02 01:04:21.077486 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-06-02 01:04:21.077492 | orchestrator |
2025-06-02 01:04:21.077497 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-06-02 01:04:21.077503 | orchestrator | Monday 02 June 2025 01:02:40 +0000 (0:00:00.589) 0:00:01.075 ***********
2025-06-02 01:04:21.077508 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:04:21.077514 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:04:21.077520 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:21.077525 | orchestrator |
2025-06-02 01:04:21.077531 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 01:04:21.077537 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 01:04:21.077545 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 01:04:21.077550 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 01:04:21.077556 | orchestrator |
2025-06-02 01:04:21.077561 | orchestrator |
2025-06-02 01:04:21.077567 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 01:04:21.077581 | orchestrator | Monday 02 June 2025 01:02:41 +0000 (0:00:00.775) 0:00:01.850 ***********
2025-06-02 01:04:21.077587 | orchestrator | ===============================================================================
2025-06-02 01:04:21.077592 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.78s
2025-06-02 01:04:21.077598 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s
2025-06-02 01:04:21.077620 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2025-06-02 01:04:21.077626 | orchestrator |
2025-06-02 01:04:21.077632 | orchestrator |
2025-06-02 01:04:21.077637 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 01:04:21.077643 | orchestrator |
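The "Waiting for Nova public port to be UP" task above is a plain TCP reachability check against the API port. A minimal sketch of that kind of wait loop is below; `wait_for_port` and its parameters are names invented here for illustration, not kolla-ansible's actual implementation (which uses Ansible's `wait_for` module):

```python
import socket
import time


def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Return True once a TCP connection to host:port succeeds.

    Illustrative sketch of a 'wait for service port' check; retries
    every `interval` seconds until `timeout` seconds have elapsed.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A completed TCP handshake is enough to call the port "UP".
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

In the log above the equivalent check targets the Nova public endpoint port on each controller and succeeds on all three nodes within a second.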
2025-06-02 01:04:21.077648 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 01:04:21.077654 | orchestrator | Monday 02 June 2025 01:02:31 +0000 (0:00:00.286) 0:00:00.286 ***********
2025-06-02 01:04:21.077659 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:21.077665 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:04:21.077671 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:04:21.077676 | orchestrator |
2025-06-02 01:04:21.077682 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 01:04:21.077687 | orchestrator | Monday 02 June 2025 01:02:32 +0000 (0:00:00.327) 0:00:00.613 ***********
2025-06-02 01:04:21.077693 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-06-02 01:04:21.077698 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-06-02 01:04:21.077704 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-06-02 01:04:21.077709 | orchestrator |
2025-06-02 01:04:21.077715 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-06-02 01:04:21.077720 | orchestrator |
2025-06-02 01:04:21.077726 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-02 01:04:21.077740 | orchestrator | Monday 02 June 2025 01:02:32 +0000 (0:00:00.442) 0:00:01.056 ***********
2025-06-02 01:04:21.077746 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 01:04:21.077751 | orchestrator |
2025-06-02 01:04:21.077757 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-06-02 01:04:21.077762 | orchestrator | Monday 02 June 2025 01:02:33 +0000 (0:00:00.666) 0:00:01.722 ***********
2025-06-02 01:04:21.077768 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-06-02 01:04:21.077774 | orchestrator |
2025-06-02 01:04:21.077779 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-06-02 01:04:21.077785 | orchestrator | Monday 02 June 2025 01:02:36 +0000 (0:00:03.304) 0:00:05.027 ***********
2025-06-02 01:04:21.077790 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-06-02 01:04:21.077796 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-06-02 01:04:21.077802 | orchestrator |
2025-06-02 01:04:21.077807 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-06-02 01:04:21.077813 | orchestrator | Monday 02 June 2025 01:02:43 +0000 (0:00:06.843) 0:00:11.870 ***********
2025-06-02 01:04:21.077819 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 01:04:21.077825 | orchestrator |
2025-06-02 01:04:21.077830 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-06-02 01:04:21.077836 | orchestrator | Monday 02 June 2025 01:02:46 +0000 (0:00:03.166) 0:00:15.037 ***********
2025-06-02 01:04:21.077849 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 01:04:21.077855 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-06-02 01:04:21.077861 | orchestrator |
2025-06-02 01:04:21.077866 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-06-02 01:04:21.077872 | orchestrator | Monday 02 June 2025 01:02:50 +0000 (0:00:03.896) 0:00:18.934 ***********
2025-06-02 01:04:21.077886 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 01:04:21.077893 | orchestrator |
2025-06-02 01:04:21.077900 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-06-02 01:04:21.077906 | orchestrator | Monday 02 June 2025 01:02:53 +0000 (0:00:03.268) 0:00:22.203 ***********
2025-06-02 01:04:21.077913 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-06-02 01:04:21.077924 | orchestrator |
2025-06-02 01:04:21.077931 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-06-02 01:04:21.077938 | orchestrator | Monday 02 June 2025 01:02:57 +0000 (0:00:03.808) 0:00:26.011 ***********
2025-06-02 01:04:21.077945 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:21.077951 | orchestrator |
2025-06-02 01:04:21.077958 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-06-02 01:04:21.077965 | orchestrator | Monday 02 June 2025 01:03:00 +0000 (0:00:03.066) 0:00:29.078 ***********
2025-06-02 01:04:21.077971 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:21.077978 | orchestrator |
2025-06-02 01:04:21.077984 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-06-02 01:04:21.077991 | orchestrator | Monday 02 June 2025 01:03:04 +0000 (0:00:03.599) 0:00:32.678 ***********
2025-06-02 01:04:21.077998 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:21.078004 | orchestrator |
2025-06-02 01:04:21.078010 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-06-02 01:04:21.078050 | orchestrator | Monday 02 June 2025 01:03:07 +0000 (0:00:03.403) 0:00:36.081 ***********
2025-06-02 01:04:21.078059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
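The service-ks-register steps logged around here (create the Keystone service, then its internal and public endpoints) map roughly onto openstacksdk's identity proxy. This is a hedged sketch, not the role's actual code: `register_service` and its parameters are names invented here, `conn` would come from `openstack.connect(cloud=...)`, the region default is an assumption, and idempotency/error handling are omitted:

```python
def register_service(conn, name, service_type, internal_url, public_url,
                     region="RegionOne"):
    """Create a Keystone service and its internal/public endpoints.

    Illustrative sketch of what kolla-ansible's service-ks-register
    role does for a service such as magnum (container-infra).
    """
    # Register the service in the Keystone catalog.
    service = conn.identity.create_service(name=name, type=service_type)
    # Add one endpoint per interface pointing at the API URL.
    for interface, url in (("internal", internal_url), ("public", public_url)):
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id=region,
        )
    return service
```

For the run above this would correspond to `name="magnum"`, `service_type="container-infra"`, and the two `:9511/v1` endpoint URLs shown in the "Creating endpoints" task.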
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078114 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078121 | orchestrator | 2025-06-02 01:04:21.078128 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-02 01:04:21.078135 | orchestrator | Monday 02 June 2025 01:03:09 +0000 (0:00:01.356) 0:00:37.438 *********** 2025-06-02 01:04:21.078142 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:21.078148 | orchestrator | 2025-06-02 01:04:21.078169 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-02 01:04:21.078176 | orchestrator | Monday 02 June 2025 01:03:09 +0000 (0:00:00.120) 0:00:37.559 *********** 2025-06-02 01:04:21.078183 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:21.078189 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:21.078196 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:21.078203 | orchestrator | 2025-06-02 01:04:21.078209 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-02 01:04:21.078216 | orchestrator | Monday 02 June 2025 01:03:09 +0000 (0:00:00.463) 0:00:38.022 *********** 2025-06-02 01:04:21.078223 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 01:04:21.078229 | 
orchestrator | 2025-06-02 01:04:21.078239 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-02 01:04:21.078246 | orchestrator | Monday 02 June 2025 01:03:10 +0000 (0:00:00.803) 0:00:38.826 *********** 2025-06-02 01:04:21.078251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078296 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078311 | orchestrator | 2025-06-02 01:04:21.078317 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-02 01:04:21.078325 | orchestrator | Monday 02 June 2025 01:03:12 +0000 (0:00:02.473) 0:00:41.300 *********** 2025-06-02 01:04:21.078331 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:04:21.078337 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:04:21.078342 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:04:21.078348 | orchestrator 
| 2025-06-02 01:04:21.078353 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 01:04:21.078359 | orchestrator | Monday 02 June 2025 01:03:13 +0000 (0:00:00.301) 0:00:41.601 *********** 2025-06-02 01:04:21.078364 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:04:21.078370 | orchestrator | 2025-06-02 01:04:21.078375 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-02 01:04:21.078380 | orchestrator | Monday 02 June 2025 01:03:13 +0000 (0:00:00.630) 0:00:42.231 *********** 2025-06-02 01:04:21.078386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078432 | orchestrator | 2025-06-02 
01:04:21.078438 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-02 01:04:21.078443 | orchestrator | Monday 02 June 2025 01:03:16 +0000 (0:00:02.196) 0:00:44.427 *********** 2025-06-02 01:04:21.078449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 01:04:21.078457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:21.078468 
| orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:21.078491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 01:04:21.078497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:21.078503 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:21.078509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 01:04:21.078515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:21.078525 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:21.078530 | orchestrator | 2025-06-02 01:04:21.078536 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-02 01:04:21.078541 | orchestrator | Monday 02 June 2025 01:03:16 +0000 (0:00:00.601) 0:00:45.029 *********** 2025-06-02 01:04:21.078553 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 01:04:21.078565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:21.078571 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:21.078577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 01:04:21.078583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:21.078589 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:21.078601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 01:04:21.078607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:21.078613 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:21.078618 | orchestrator | 2025-06-02 01:04:21.078624 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-02 01:04:21.078630 | orchestrator | Monday 02 June 2025 01:03:17 +0000 (0:00:01.306) 0:00:46.336 *********** 2025-06-02 01:04:21.078640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078686 | orchestrator | 2025-06-02 01:04:21.078692 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-02 01:04:21.078698 | orchestrator | Monday 02 June 2025 01:03:20 +0000 (0:00:02.450) 0:00:48.786 *********** 2025-06-02 01:04:21.078703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078753 | orchestrator | 2025-06-02 01:04:21.078759 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-02 01:04:21.078765 | orchestrator | Monday 02 June 2025 01:03:26 +0000 (0:00:06.248) 0:00:55.035 *********** 2025-06-02 01:04:21.078776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 01:04:21.078787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:21.078796 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:21.078810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 01:04:21.078819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:21.078835 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:21.078845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 01:04:21.078859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:21.078869 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:21.078875 | orchestrator | 2025-06-02 01:04:21.078880 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-02 01:04:21.078886 | orchestrator | Monday 02 June 2025 01:03:27 +0000 (0:00:00.750) 0:00:55.786 *********** 2025-06-02 01:04:21.078895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 01:04:21.078917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:21.078941 | orchestrator | 2025-06-02 01:04:21.078947 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2025-06-02 01:04:21.078953 | orchestrator | Monday 02 June 2025 01:03:29 +0000 (0:00:02.069) 0:00:57.855 *********** 2025-06-02 01:04:21.078958 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:21.078964 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:21.078970 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:21.078975 | orchestrator | 2025-06-02 01:04:21.078981 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-02 01:04:21.078986 | orchestrator | Monday 02 June 2025 01:03:29 +0000 (0:00:00.257) 0:00:58.113 *********** 2025-06-02 01:04:21.078992 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:21.078997 | orchestrator | 2025-06-02 01:04:21.079003 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-02 01:04:21.079012 | orchestrator | Monday 02 June 2025 01:03:31 +0000 (0:00:01.936) 0:01:00.050 *********** 2025-06-02 01:04:21.079017 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:21.079023 | orchestrator | 2025-06-02 01:04:21.079028 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-02 01:04:21.079034 | orchestrator | Monday 02 June 2025 01:03:33 +0000 (0:00:02.126) 0:01:02.176 *********** 2025-06-02 01:04:21.079039 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:21.079044 | orchestrator | 2025-06-02 01:04:21.079050 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 01:04:21.079055 | orchestrator | Monday 02 June 2025 01:03:52 +0000 (0:00:18.315) 0:01:20.492 *********** 2025-06-02 01:04:21.079061 | orchestrator | 2025-06-02 01:04:21.079066 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 01:04:21.079072 | orchestrator | Monday 02 June 2025 01:03:52 +0000 (0:00:00.055) 
0:01:20.547 *********** 2025-06-02 01:04:21.079077 | orchestrator | 2025-06-02 01:04:21.079083 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 01:04:21.079088 | orchestrator | Monday 02 June 2025 01:03:52 +0000 (0:00:00.056) 0:01:20.603 *********** 2025-06-02 01:04:21.079093 | orchestrator | 2025-06-02 01:04:21.079099 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-02 01:04:21.079105 | orchestrator | Monday 02 June 2025 01:03:52 +0000 (0:00:00.056) 0:01:20.660 *********** 2025-06-02 01:04:21.079110 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:21.079115 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:04:21.079121 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:04:21.079127 | orchestrator | 2025-06-02 01:04:21.079132 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-02 01:04:21.079137 | orchestrator | Monday 02 June 2025 01:04:07 +0000 (0:00:14.804) 0:01:35.465 *********** 2025-06-02 01:04:21.079143 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:21.079148 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:04:21.079171 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:04:21.079177 | orchestrator | 2025-06-02 01:04:21.079183 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 01:04:21.079188 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 01:04:21.079196 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 01:04:21.079201 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 01:04:21.079207 | orchestrator | 2025-06-02 01:04:21.079212 | orchestrator | 2025-06-02 01:04:21.079218 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 01:04:21.079223 | orchestrator | Monday 02 June 2025 01:04:18 +0000 (0:00:11.541) 0:01:47.006 *********** 2025-06-02 01:04:21.079229 | orchestrator | =============================================================================== 2025-06-02 01:04:21.079234 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.32s 2025-06-02 01:04:21.079242 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.80s 2025-06-02 01:04:21.079248 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.54s 2025-06-02 01:04:21.079254 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.84s 2025-06-02 01:04:21.079259 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.25s 2025-06-02 01:04:21.079264 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.90s 2025-06-02 01:04:21.079270 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.81s 2025-06-02 01:04:21.079275 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.60s 2025-06-02 01:04:21.079286 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.40s 2025-06-02 01:04:21.079292 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.30s 2025-06-02 01:04:21.079297 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.27s 2025-06-02 01:04:21.079303 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.17s 2025-06-02 01:04:21.079308 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.07s 2025-06-02 01:04:21.079314 | orchestrator | magnum 
: Copying over kubeconfig file ----------------------------------- 2.47s 2025-06-02 01:04:21.079319 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.45s 2025-06-02 01:04:21.079325 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.20s 2025-06-02 01:04:21.079333 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.13s 2025-06-02 01:04:21.079339 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.07s 2025-06-02 01:04:21.079345 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.94s 2025-06-02 01:04:21.079350 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.36s 2025-06-02 01:04:21.079356 | orchestrator | 2025-06-02 01:04:21 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:04:21.080440 | orchestrator | 2025-06-02 01:04:21 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:04:21.082461 | orchestrator | 2025-06-02 01:04:21 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:04:21.082502 | orchestrator | 2025-06-02 01:04:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:04:24.119366 | orchestrator | 2025-06-02 01:04:24 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:04:24.122081 | orchestrator | 2025-06-02 01:04:24 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:04:24.124759 | orchestrator | 2025-06-02 01:04:24 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:04:24.124829 | orchestrator | 2025-06-02 01:04:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:04:27.176191 | orchestrator | 2025-06-02 01:04:27 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 
2025-06-02 01:04:39.368788 | orchestrator | 2025-06-02
01:04:39 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:04:39.370646 | orchestrator | 2025-06-02 01:04:39 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:04:39.373036 | orchestrator | 2025-06-02 01:04:39 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:04:39.373125 | orchestrator | 2025-06-02 01:04:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:04:42.410094 | orchestrator | 2025-06-02 01:04:42 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:04:42.410883 | orchestrator | 2025-06-02 01:04:42 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:04:42.412495 | orchestrator | 2025-06-02 01:04:42 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:04:42.412518 | orchestrator | 2025-06-02 01:04:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:04:45.460784 | orchestrator | 2025-06-02 01:04:45 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state STARTED 2025-06-02 01:04:45.461253 | orchestrator | 2025-06-02 01:04:45 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED 2025-06-02 01:04:45.462908 | orchestrator | 2025-06-02 01:04:45 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED 2025-06-02 01:04:45.462931 | orchestrator | 2025-06-02 01:04:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 01:04:48.513481 | orchestrator | 2025-06-02 01:04:48 | INFO  | Task d0d011c1-fe8a-43e0-86f1-d1fd80735c30 is in state SUCCESS 2025-06-02 01:04:48.516015 | orchestrator | 2025-06-02 01:04:48.516064 | orchestrator | 2025-06-02 01:04:48.516077 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 01:04:48.516089 | orchestrator | 2025-06-02 01:04:48.516101 | orchestrator | TASK [Group hosts based on OpenStack release] 
********************************** 2025-06-02 01:04:48.516112 | orchestrator | Monday 02 June 2025 00:56:14 +0000 (0:00:00.341) 0:00:00.341 *********** 2025-06-02 01:04:48.516162 | orchestrator | changed: [testbed-manager] 2025-06-02 01:04:48.516189 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.516202 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:04:48.516213 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:04:48.516224 | orchestrator | changed: [testbed-node-3] 2025-06-02 01:04:48.516236 | orchestrator | changed: [testbed-node-4] 2025-06-02 01:04:48.516247 | orchestrator | changed: [testbed-node-5] 2025-06-02 01:04:48.516258 | orchestrator | 2025-06-02 01:04:48.516269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 01:04:48.516280 | orchestrator | Monday 02 June 2025 00:56:14 +0000 (0:00:00.744) 0:00:01.085 *********** 2025-06-02 01:04:48.516292 | orchestrator | changed: [testbed-manager] 2025-06-02 01:04:48.516303 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.516314 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:04:48.516324 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:04:48.516335 | orchestrator | changed: [testbed-node-3] 2025-06-02 01:04:48.516346 | orchestrator | changed: [testbed-node-4] 2025-06-02 01:04:48.516385 | orchestrator | changed: [testbed-node-5] 2025-06-02 01:04:48.516396 | orchestrator | 2025-06-02 01:04:48.516407 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 01:04:48.516418 | orchestrator | Monday 02 June 2025 00:56:15 +0000 (0:00:00.617) 0:00:01.703 *********** 2025-06-02 01:04:48.516429 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-02 01:04:48.516441 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-02 01:04:48.516451 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 
2025-06-02 01:04:48.516462 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-02 01:04:48.516473 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-02 01:04:48.516484 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-02 01:04:48.516494 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-02 01:04:48.516505 | orchestrator | 2025-06-02 01:04:48.516516 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-02 01:04:48.516527 | orchestrator | 2025-06-02 01:04:48.516538 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-02 01:04:48.516548 | orchestrator | Monday 02 June 2025 00:56:16 +0000 (0:00:00.819) 0:00:02.522 *********** 2025-06-02 01:04:48.516559 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:04:48.516570 | orchestrator | 2025-06-02 01:04:48.516581 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-02 01:04:48.516592 | orchestrator | Monday 02 June 2025 00:56:16 +0000 (0:00:00.528) 0:00:03.050 *********** 2025-06-02 01:04:48.516605 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-02 01:04:48.516618 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-02 01:04:48.516632 | orchestrator | 2025-06-02 01:04:48.516645 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-02 01:04:48.516657 | orchestrator | Monday 02 June 2025 00:56:20 +0000 (0:00:03.423) 0:00:06.474 *********** 2025-06-02 01:04:48.516671 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 01:04:48.516683 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 01:04:48.516696 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.516710 | orchestrator | 2025-06-02 
01:04:48.516737 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-02 01:04:48.516751 | orchestrator | Monday 02 June 2025 00:56:23 +0000 (0:00:03.423) 0:00:09.898 *********** 2025-06-02 01:04:48.516765 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.516777 | orchestrator | 2025-06-02 01:04:48.516790 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-02 01:04:48.516803 | orchestrator | Monday 02 June 2025 00:56:24 +0000 (0:00:00.676) 0:00:10.574 *********** 2025-06-02 01:04:48.516816 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.516829 | orchestrator | 2025-06-02 01:04:48.516842 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-02 01:04:48.516855 | orchestrator | Monday 02 June 2025 00:56:25 +0000 (0:00:01.437) 0:00:12.011 *********** 2025-06-02 01:04:48.516867 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.516880 | orchestrator | 2025-06-02 01:04:48.516894 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 01:04:48.516906 | orchestrator | Monday 02 June 2025 00:56:28 +0000 (0:00:02.837) 0:00:14.849 *********** 2025-06-02 01:04:48.516920 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.516932 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.516945 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.516959 | orchestrator | 2025-06-02 01:04:48.516972 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 01:04:48.516983 | orchestrator | Monday 02 June 2025 00:56:29 +0000 (0:00:00.413) 0:00:15.262 *********** 2025-06-02 01:04:48.516994 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:04:48.517013 | orchestrator | 2025-06-02 01:04:48.517024 | orchestrator | TASK [nova : Create cell0 mappings] 
******************************************** 2025-06-02 01:04:48.517035 | orchestrator | Monday 02 June 2025 00:56:55 +0000 (0:00:26.050) 0:00:41.313 *********** 2025-06-02 01:04:48.517046 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.517056 | orchestrator | 2025-06-02 01:04:48.517067 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 01:04:48.517078 | orchestrator | Monday 02 June 2025 00:57:08 +0000 (0:00:12.985) 0:00:54.298 *********** 2025-06-02 01:04:48.517089 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:04:48.517099 | orchestrator | 2025-06-02 01:04:48.517110 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 01:04:48.517139 | orchestrator | Monday 02 June 2025 00:57:18 +0000 (0:00:10.059) 0:01:04.358 *********** 2025-06-02 01:04:48.517163 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:04:48.517174 | orchestrator | 2025-06-02 01:04:48.517185 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-02 01:04:48.517196 | orchestrator | Monday 02 June 2025 00:57:19 +0000 (0:00:01.166) 0:01:05.525 *********** 2025-06-02 01:04:48.517207 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.517217 | orchestrator | 2025-06-02 01:04:48.517228 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 01:04:48.517239 | orchestrator | Monday 02 June 2025 00:57:19 +0000 (0:00:00.443) 0:01:05.968 *********** 2025-06-02 01:04:48.517250 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:04:48.517261 | orchestrator | 2025-06-02 01:04:48.517273 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 01:04:48.517283 | orchestrator | Monday 02 June 2025 00:57:20 +0000 (0:00:00.516) 
0:01:06.485 *********** 2025-06-02 01:04:48.517294 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:04:48.517305 | orchestrator | 2025-06-02 01:04:48.517316 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-02 01:04:48.517327 | orchestrator | Monday 02 June 2025 00:57:36 +0000 (0:00:16.634) 0:01:23.119 *********** 2025-06-02 01:04:48.517338 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.517349 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.517360 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.517371 | orchestrator | 2025-06-02 01:04:48.517382 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-06-02 01:04:48.517392 | orchestrator | 2025-06-02 01:04:48.517403 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-02 01:04:48.517414 | orchestrator | Monday 02 June 2025 00:57:37 +0000 (0:00:00.341) 0:01:23.460 *********** 2025-06-02 01:04:48.517425 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:04:48.517435 | orchestrator | 2025-06-02 01:04:48.517446 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-06-02 01:04:48.517457 | orchestrator | Monday 02 June 2025 00:57:37 +0000 (0:00:00.563) 0:01:24.023 *********** 2025-06-02 01:04:48.517468 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.517479 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.517490 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.517501 | orchestrator | 2025-06-02 01:04:48.517512 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-06-02 01:04:48.517522 | orchestrator | Monday 02 June 2025 00:57:39 +0000 (0:00:01.916) 0:01:25.940 *********** 2025-06-02 01:04:48.517533 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 01:04:48.517544 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.517555 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.517566 | orchestrator | 2025-06-02 01:04:48.517577 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-02 01:04:48.517588 | orchestrator | Monday 02 June 2025 00:57:41 +0000 (0:00:01.936) 0:01:27.876 *********** 2025-06-02 01:04:48.517599 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.517616 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.517628 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.517638 | orchestrator | 2025-06-02 01:04:48.517649 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-02 01:04:48.517660 | orchestrator | Monday 02 June 2025 00:57:41 +0000 (0:00:00.336) 0:01:28.212 *********** 2025-06-02 01:04:48.517671 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 01:04:48.517682 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.517693 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 01:04:48.517705 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.517721 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-02 01:04:48.517732 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-06-02 01:04:48.517743 | orchestrator | 2025-06-02 01:04:48.517754 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-02 01:04:48.517765 | orchestrator | Monday 02 June 2025 00:57:50 +0000 (0:00:08.204) 0:01:36.417 *********** 2025-06-02 01:04:48.517776 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.517787 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.517798 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.517809 | orchestrator | 2025-06-02 
01:04:48.517820 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-02 01:04:48.517831 | orchestrator | Monday 02 June 2025 00:57:50 +0000 (0:00:00.634) 0:01:37.052 *********** 2025-06-02 01:04:48.517841 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 01:04:48.517853 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.517864 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 01:04:48.517875 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.517886 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 01:04:48.517897 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.517908 | orchestrator | 2025-06-02 01:04:48.517919 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-02 01:04:48.517930 | orchestrator | Monday 02 June 2025 00:57:51 +0000 (0:00:00.635) 0:01:37.687 *********** 2025-06-02 01:04:48.517941 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.517952 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.517963 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.517973 | orchestrator | 2025-06-02 01:04:48.517984 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-06-02 01:04:48.517995 | orchestrator | Monday 02 June 2025 00:57:52 +0000 (0:00:00.622) 0:01:38.310 *********** 2025-06-02 01:04:48.518006 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.518057 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.518071 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.518082 | orchestrator | 2025-06-02 01:04:48.518093 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-06-02 01:04:48.518104 | orchestrator | Monday 02 June 2025 00:57:53 +0000 (0:00:00.914) 0:01:39.225 *********** 2025-06-02 
01:04:48.518115 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.518147 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.518165 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:48.518176 | orchestrator |
2025-06-02 01:04:48.518187 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-06-02 01:04:48.518198 | orchestrator | Monday 02 June 2025 00:57:54 +0000 (0:00:01.860) 0:01:41.086 ***********
2025-06-02 01:04:48.518209 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.518220 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.518231 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:48.518243 | orchestrator |
2025-06-02 01:04:48.518254 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-02 01:04:48.518265 | orchestrator | Monday 02 June 2025 00:58:16 +0000 (0:00:21.441) 0:02:02.527 ***********
2025-06-02 01:04:48.518284 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.518295 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.518306 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:48.518317 | orchestrator |
2025-06-02 01:04:48.518328 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-02 01:04:48.518339 | orchestrator | Monday 02 June 2025 00:58:27 +0000 (0:00:11.491) 0:02:14.019 ***********
2025-06-02 01:04:48.518350 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:48.518361 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.518373 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.518383 | orchestrator |
2025-06-02 01:04:48.518395 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-06-02 01:04:48.518406 | orchestrator | Monday 02 June 2025 00:58:28 +0000 (0:00:00.838) 0:02:14.857 ***********
2025-06-02 01:04:48.518416 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.518427 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.518438 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:48.518449 | orchestrator |
2025-06-02 01:04:48.518460 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-06-02 01:04:48.518471 | orchestrator | Monday 02 June 2025 00:58:39 +0000 (0:00:10.722) 0:02:25.580 ***********
2025-06-02 01:04:48.518482 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.518493 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.518504 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.518515 | orchestrator |
2025-06-02 01:04:48.518526 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-02 01:04:48.518537 | orchestrator | Monday 02 June 2025 00:58:40 +0000 (0:00:01.424) 0:02:27.004 ***********
2025-06-02 01:04:48.518548 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.518559 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.518570 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.518581 | orchestrator |
2025-06-02 01:04:48.518592 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-06-02 01:04:48.518603 | orchestrator |
2025-06-02 01:04:48.518614 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-02 01:04:48.518625 | orchestrator | Monday 02 June 2025 00:58:41 +0000 (0:00:00.323) 0:02:27.328 ***********
2025-06-02 01:04:48.518636 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 01:04:48.518648 | orchestrator |
2025-06-02 01:04:48.518659 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-06-02 01:04:48.518670 | orchestrator | Monday 02 June 2025 00:58:41 +0000 (0:00:00.508) 0:02:27.837 ***********
2025-06-02 01:04:48.518681 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-06-02 01:04:48.518705 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-06-02 01:04:48.518726 | orchestrator |
2025-06-02 01:04:48.518737 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-06-02 01:04:48.518755 | orchestrator | Monday 02 June 2025 00:58:44 +0000 (0:00:03.106) 0:02:30.943 ***********
2025-06-02 01:04:48.518767 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-06-02 01:04:48.518779 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-06-02 01:04:48.518791 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-06-02 01:04:48.518802 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-06-02 01:04:48.518813 | orchestrator |
2025-06-02 01:04:48.518824 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-06-02 01:04:48.518835 | orchestrator | Monday 02 June 2025 00:58:51 +0000 (0:00:06.361) 0:02:37.305 ***********
2025-06-02 01:04:48.518845 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 01:04:48.518863 | orchestrator |
2025-06-02 01:04:48.518874 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-06-02 01:04:48.518885 | orchestrator | Monday 02 June 2025 00:58:54 +0000 (0:00:03.175) 0:02:40.480 ***********
2025-06-02 01:04:48.518896 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 01:04:48.518907 | orchestrator | changed: [testbed-node-0] =>
(item=nova -> service)
2025-06-02 01:04:48.518918 | orchestrator |
2025-06-02 01:04:48.518929 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-06-02 01:04:48.518939 | orchestrator | Monday 02 June 2025 00:58:57 +0000 (0:00:03.733) 0:02:44.214 ***********
2025-06-02 01:04:48.518950 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 01:04:48.518961 | orchestrator |
2025-06-02 01:04:48.518972 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-06-02 01:04:48.518983 | orchestrator | Monday 02 June 2025 00:59:01 +0000 (0:00:03.210) 0:02:47.424 ***********
2025-06-02 01:04:48.518993 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-06-02 01:04:48.519004 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-06-02 01:04:48.519015 | orchestrator |
2025-06-02 01:04:48.519026 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-02 01:04:48.519051 | orchestrator | Monday 02 June 2025 00:59:08 +0000 (0:00:06.886) 0:02:54.310 ***********
2025-06-02 01:04:48.519068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled':
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.519091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.519105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.519176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.519193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.519205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.519217 | orchestrator |
2025-06-02 01:04:48.519228 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-06-02 01:04:48.519239 | orchestrator | Monday 02 June 2025 00:59:09 +0000 (0:00:01.166) 0:02:55.477 ***********
2025-06-02 01:04:48.519250 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.519261 | orchestrator |
2025-06-02 01:04:48.519272 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-06-02 01:04:48.519283 | orchestrator | Monday 02 June 2025 00:59:09 +0000 (0:00:00.124) 0:02:55.601 ***********
2025-06-02 01:04:48.519294 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.519305 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.519316 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.519327 | orchestrator |
2025-06-02 01:04:48.519338 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
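[editor's note] The "Check if policies shall be overwritten" / "Set nova policy file" tasks above skip on all nodes because no custom nova policy override is present in the deployment configuration. A minimal Python sketch of that kind of lookup, assuming hypothetical paths (the role's actual search locations differ):

```python
import os

def pick_policy_file(custom_dir):
    """Return the first nova policy override found in custom_dir.

    Mirrors the check-then-set pattern of the two tasks above: if no
    override file exists, the tasks skip and service defaults apply.
    The directory and file names here are illustrative only.
    """
    for name in ("policy.yaml", "policy.json"):
        candidate = os.path.join(custom_dir, name)
        if os.path.exists(candidate):
            return candidate
    return None  # no override found -> tasks skip

# With no override directory present, nothing is selected:
print(pick_policy_file("/tmp/definitely-missing-nova-overrides"))
```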
2025-06-02 01:04:48.519349 | orchestrator | Monday 02 June 2025 00:59:09 +0000 (0:00:00.474) 0:02:56.076 ***********
2025-06-02 01:04:48.519367 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 01:04:48.519378 | orchestrator |
2025-06-02 01:04:48.519389 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-06-02 01:04:48.519399 | orchestrator | Monday 02 June 2025 00:59:10 +0000 (0:00:01.111) 0:02:57.188 ***********
2025-06-02 01:04:48.519410 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.519421 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.519432 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.519443 | orchestrator |
2025-06-02 01:04:48.519459 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-02 01:04:48.519470 | orchestrator | Monday 02 June 2025 00:59:11 +0000 (0:00:00.323) 0:02:57.511 ***********
2025-06-02 01:04:48.519481 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 01:04:48.519492 | orchestrator |
2025-06-02 01:04:48.519503 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-02 01:04:48.519514 | orchestrator | Monday 02 June 2025 00:59:11 +0000 (0:00:00.604) 0:02:58.115 ***********
2025-06-02 01:04:48.519532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.519546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 
01:04:48.519564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.519584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.519596 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.519616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.519628 | orchestrator | 2025-06-02 01:04:48.519640 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-02 01:04:48.519651 | orchestrator | Monday 02 June 2025 00:59:14 +0000 (0:00:02.285) 0:03:00.400 *********** 2025-06-02 01:04:48.519663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 01:04:48.519693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.519705 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.519722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 01:04:48.519741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.519753 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.519765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 01:04:48.519784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.519796 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.519807 | orchestrator | 2025-06-02 01:04:48.519818 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 01:04:48.519829 | orchestrator | Monday 02 June 2025 00:59:15 +0000 (0:00:01.159) 0:03:01.560 *********** 2025-06-02 01:04:48.519846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 01:04:48.519858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.519870 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.519890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 01:04:48.519910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.519922 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.519938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 01:04:48.519951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.519962 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.519973 | orchestrator | 2025-06-02 01:04:48.519985 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-02 01:04:48.519995 | orchestrator | Monday 02 June 2025 00:59:17 +0000 (0:00:01.723) 0:03:03.284 *********** 
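[editor's note] The "Copying over config.json files for services" task that follows ships one kolla-style config.json per container; at runtime the container's init copies each listed source into place before starting the service. A minimal sketch of that payload shape, with a hypothetical command and file pair (the files the role actually renders contain more fields):

```python
import json

def kolla_config_json(command, config_files):
    """Build a minimal kolla-style config.json payload.

    Only the common 'command' + 'config_files' shape is shown here;
    owner/perm values are illustrative defaults, not the role's.
    """
    return {
        "command": command,
        "config_files": [
            {"source": src, "dest": dest, "owner": "nova", "perm": "0600"}
            for src, dest in config_files
        ],
    }

payload = kolla_config_json(
    "nova-api",  # hypothetical entrypoint command
    [("/var/lib/kolla/config_files/nova.conf", "/etc/nova/nova.conf")],
)
print(json.dumps(payload, indent=2))
```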
2025-06-02 01:04:48.520015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.520035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.520053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.520072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.520085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.520103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.520115 | orchestrator | 2025-06-02 01:04:48.520143 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-02 01:04:48.520154 | orchestrator | Monday 02 June 2025 00:59:19 +0000 
(0:00:02.608) 0:03:05.892 *********** 2025-06-02 01:04:48.520171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.520184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.520204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.520225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 
'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.520237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.520253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.520265 | orchestrator | 2025-06-02 01:04:48.520276 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-02 01:04:48.520287 | orchestrator | Monday 02 
June 2025 00:59:27 +0000 (0:00:08.006) 0:03:13.899 *********** 2025-06-02 01:04:48.520305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 01:04:48.520319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': 
'30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 01:04:48.520338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.520350 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.520366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.520378 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.520390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 01:04:48.520411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.520428 | orchestrator | 
skipping: [testbed-node-2] 2025-06-02 01:04:48.520440 | orchestrator | 2025-06-02 01:04:48.520451 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-02 01:04:48.520462 | orchestrator | Monday 02 June 2025 00:59:28 +0000 (0:00:00.576) 0:03:14.475 *********** 2025-06-02 01:04:48.520473 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.520484 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:04:48.520495 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:04:48.520506 | orchestrator | 2025-06-02 01:04:48.520517 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-02 01:04:48.520529 | orchestrator | Monday 02 June 2025 00:59:30 +0000 (0:00:02.010) 0:03:16.486 *********** 2025-06-02 01:04:48.520540 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.520551 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.520562 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.520573 | orchestrator | 2025-06-02 01:04:48.520584 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-02 01:04:48.520595 | orchestrator | Monday 02 June 2025 00:59:30 +0000 (0:00:00.615) 0:03:17.101 *********** 2025-06-02 01:04:48.520607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.520628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.520659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 01:04:48.520672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.520684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.520701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.520712 | orchestrator | 2025-06-02 01:04:48.520723 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 01:04:48.520735 | orchestrator | Monday 02 June 2025 00:59:33 +0000 (0:00:02.527) 0:03:19.629 *********** 2025-06-02 01:04:48.520746 | orchestrator | 2025-06-02 01:04:48.520757 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 01:04:48.520768 | orchestrator | Monday 02 June 2025 00:59:33 +0000 (0:00:00.237) 0:03:19.866 *********** 2025-06-02 01:04:48.520778 | orchestrator | 2025-06-02 01:04:48.520789 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 01:04:48.520800 | orchestrator | Monday 02 June 2025 00:59:33 +0000 (0:00:00.142) 0:03:20.009 *********** 2025-06-02 01:04:48.520811 | orchestrator | 2025-06-02 01:04:48.520821 | 
orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-02 01:04:48.520838 | orchestrator | Monday 02 June 2025 00:59:34 +0000 (0:00:00.237) 0:03:20.246 *********** 2025-06-02 01:04:48.520849 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.520860 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:04:48.520871 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:04:48.520882 | orchestrator | 2025-06-02 01:04:48.520893 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-02 01:04:48.520904 | orchestrator | Monday 02 June 2025 00:59:54 +0000 (0:00:20.338) 0:03:40.585 *********** 2025-06-02 01:04:48.520915 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:04:48.520926 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:04:48.520937 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:04:48.520947 | orchestrator | 2025-06-02 01:04:48.520958 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-02 01:04:48.520969 | orchestrator | 2025-06-02 01:04:48.520980 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 01:04:48.520991 | orchestrator | Monday 02 June 2025 01:00:05 +0000 (0:00:11.052) 0:03:51.637 *********** 2025-06-02 01:04:48.521002 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:04:48.521014 | orchestrator | 2025-06-02 01:04:48.521031 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 01:04:48.521042 | orchestrator | Monday 02 June 2025 01:00:08 +0000 (0:00:02.699) 0:03:54.337 *********** 2025-06-02 01:04:48.521053 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.521064 | orchestrator | skipping: [testbed-node-4] 
2025-06-02 01:04:48.521075 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.521086 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.521097 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.521108 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.521120 | orchestrator | 2025-06-02 01:04:48.521150 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-02 01:04:48.521162 | orchestrator | Monday 02 June 2025 01:00:10 +0000 (0:00:02.434) 0:03:56.771 *********** 2025-06-02 01:04:48.521173 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.521183 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.521194 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.521205 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 01:04:48.521216 | orchestrator | 2025-06-02 01:04:48.521227 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-02 01:04:48.521238 | orchestrator | Monday 02 June 2025 01:00:12 +0000 (0:00:01.931) 0:03:58.703 *********** 2025-06-02 01:04:48.521249 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-02 01:04:48.521260 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-02 01:04:48.521271 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-02 01:04:48.521282 | orchestrator | 2025-06-02 01:04:48.521293 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 01:04:48.521304 | orchestrator | Monday 02 June 2025 01:00:13 +0000 (0:00:00.780) 0:03:59.484 *********** 2025-06-02 01:04:48.521314 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-02 01:04:48.521325 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-02 01:04:48.521336 | orchestrator | changed: [testbed-node-5] => 
(item=br_netfilter) 2025-06-02 01:04:48.521347 | orchestrator | 2025-06-02 01:04:48.521358 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 01:04:48.521368 | orchestrator | Monday 02 June 2025 01:00:14 +0000 (0:00:01.564) 0:04:01.049 *********** 2025-06-02 01:04:48.521379 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-02 01:04:48.521390 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.521401 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-02 01:04:48.521419 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.521430 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-02 01:04:48.521441 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.521452 | orchestrator | 2025-06-02 01:04:48.521463 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-02 01:04:48.521474 | orchestrator | Monday 02 June 2025 01:00:15 +0000 (0:00:00.707) 0:04:01.756 *********** 2025-06-02 01:04:48.521485 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 01:04:48.521496 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 01:04:48.521507 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.521518 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 01:04:48.521529 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 01:04:48.521540 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.521551 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 01:04:48.521566 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 01:04:48.521578 | orchestrator | skipping: 
[testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 01:04:48.521588 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 01:04:48.521599 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.521610 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 01:04:48.521621 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 01:04:48.521631 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 01:04:48.521642 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 01:04:48.521653 | orchestrator | 2025-06-02 01:04:48.521664 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-02 01:04:48.521675 | orchestrator | Monday 02 June 2025 01:00:16 +0000 (0:00:01.068) 0:04:02.825 *********** 2025-06-02 01:04:48.521685 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.521696 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.521707 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.521718 | orchestrator | changed: [testbed-node-3] 2025-06-02 01:04:48.521729 | orchestrator | changed: [testbed-node-4] 2025-06-02 01:04:48.521740 | orchestrator | changed: [testbed-node-5] 2025-06-02 01:04:48.521751 | orchestrator | 2025-06-02 01:04:48.521761 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-02 01:04:48.521772 | orchestrator | Monday 02 June 2025 01:00:17 +0000 (0:00:01.397) 0:04:04.222 *********** 2025-06-02 01:04:48.521783 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.521794 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.521805 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.521816 | orchestrator | changed: [testbed-node-3] 
2025-06-02 01:04:48.521827 | orchestrator | changed: [testbed-node-4]
2025-06-02 01:04:48.521837 | orchestrator | changed: [testbed-node-5]
2025-06-02 01:04:48.521848 | orchestrator |
2025-06-02 01:04:48.521859 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-02 01:04:48.521870 | orchestrator | Monday 02 June 2025 01:00:20 +0000 (0:00:02.042) 0:04:06.264 ***********
2025-06-02 01:04:48.521889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.521909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.521921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.521937 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.521949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.521969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.521981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.521999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.522082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.522102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.522188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522597 | orchestrator |
2025-06-02 01:04:48.522614 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-02 01:04:48.522626 | orchestrator | Monday 02 June 2025 01:00:23 +0000 (0:00:03.724) 0:04:09.989 ***********
2025-06-02 01:04:48.522654 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 01:04:48.522666 | orchestrator |
2025-06-02 01:04:48.522678 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-02 01:04:48.522688 | orchestrator | Monday 02 June 2025 01:00:25 +0000 (0:00:01.736) 0:04:11.726 ***********
2025-06-02 01:04:48.522701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.522714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.522750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.522778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.522796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.522808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.522819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.522838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.522850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.522861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.522950 | orchestrator |
2025-06-02 01:04:48.522961 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-06-02 01:04:48.522973 | orchestrator | Monday 02 June 2025 01:00:29 +0000 (0:00:04.250) 0:04:15.976 ***********
2025-06-02 01:04:48.522990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.523008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.523020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.523037 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:04:48.523052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.523063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.523075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.523086 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:04:48.523110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.523148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.523169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.523180 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:04:48.523192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.523204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.523215 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.523227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.523245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.523257 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.523274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.523292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.523303 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.523315 | orchestrator |
2025-06-02 01:04:48.523327 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-06-02 01:04:48.523338 | orchestrator | Monday 02 June 2025 01:00:34 +0000 (0:00:04.502) 0:04:20.478 ***********
2025-06-02 01:04:48.523349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.523361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.523372 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.523389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.523406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.523418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.523435 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:04:48.523447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 01:04:48.523459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 01:04:48.523470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.523482 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.523510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 01:04:48.523529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 01:04:48.523541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.523552 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.523564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 01:04:48.523575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.523587 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.523598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 01:04:48.523616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.523634 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.523645 | orchestrator | 2025-06-02 01:04:48.523656 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 01:04:48.523672 | orchestrator | Monday 02 June 2025 01:00:36 +0000 (0:00:02.675) 0:04:23.153 *********** 2025-06-02 01:04:48.523684 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.523695 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.523707 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.523718 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 01:04:48.523729 | orchestrator | 2025-06-02 01:04:48.523740 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-06-02 01:04:48.523751 | orchestrator | Monday 02 June 2025 01:00:39 +0000 (0:00:02.288) 0:04:25.442 
*********** 2025-06-02 01:04:48.523762 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 01:04:48.523773 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 01:04:48.523784 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 01:04:48.523795 | orchestrator | 2025-06-02 01:04:48.523806 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-02 01:04:48.523817 | orchestrator | Monday 02 June 2025 01:00:41 +0000 (0:00:02.122) 0:04:27.564 *********** 2025-06-02 01:04:48.523828 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 01:04:48.523839 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 01:04:48.523850 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 01:04:48.523861 | orchestrator | 2025-06-02 01:04:48.523872 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-02 01:04:48.523883 | orchestrator | Monday 02 June 2025 01:00:42 +0000 (0:00:01.571) 0:04:29.136 *********** 2025-06-02 01:04:48.523894 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:04:48.523906 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:04:48.523917 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:04:48.523929 | orchestrator | 2025-06-02 01:04:48.523940 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-02 01:04:48.523951 | orchestrator | Monday 02 June 2025 01:00:44 +0000 (0:00:01.200) 0:04:30.336 *********** 2025-06-02 01:04:48.523962 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:04:48.523973 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:04:48.523984 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:04:48.523994 | orchestrator | 2025-06-02 01:04:48.524006 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-06-02 01:04:48.524017 | orchestrator | Monday 02 June 2025 01:00:44 +0000 (0:00:00.846) 
0:04:31.183 *********** 2025-06-02 01:04:48.524028 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 01:04:48.524039 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 01:04:48.524050 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 01:04:48.524060 | orchestrator | 2025-06-02 01:04:48.524071 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-06-02 01:04:48.524082 | orchestrator | Monday 02 June 2025 01:00:46 +0000 (0:00:01.147) 0:04:32.330 *********** 2025-06-02 01:04:48.524093 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 01:04:48.524104 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 01:04:48.524115 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 01:04:48.524145 | orchestrator | 2025-06-02 01:04:48.524156 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-02 01:04:48.524180 | orchestrator | Monday 02 June 2025 01:00:47 +0000 (0:00:01.303) 0:04:33.633 *********** 2025-06-02 01:04:48.524192 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 01:04:48.524202 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 01:04:48.524220 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 01:04:48.524243 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-02 01:04:48.524254 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-02 01:04:48.524265 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-02 01:04:48.524276 | orchestrator | 2025-06-02 01:04:48.524287 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-02 01:04:48.524298 | orchestrator | Monday 02 June 2025 01:00:51 +0000 (0:00:04.336) 0:04:37.969 *********** 
2025-06-02 01:04:48.524309 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.524320 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.524331 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.524342 | orchestrator | 2025-06-02 01:04:48.524353 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-02 01:04:48.524364 | orchestrator | Monday 02 June 2025 01:00:52 +0000 (0:00:00.312) 0:04:38.282 *********** 2025-06-02 01:04:48.524375 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.524385 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.524396 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.524407 | orchestrator | 2025-06-02 01:04:48.524418 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-02 01:04:48.524429 | orchestrator | Monday 02 June 2025 01:00:52 +0000 (0:00:00.456) 0:04:38.738 *********** 2025-06-02 01:04:48.524440 | orchestrator | changed: [testbed-node-5] 2025-06-02 01:04:48.524451 | orchestrator | changed: [testbed-node-3] 2025-06-02 01:04:48.524462 | orchestrator | changed: [testbed-node-4] 2025-06-02 01:04:48.524473 | orchestrator | 2025-06-02 01:04:48.524491 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-02 01:04:48.524503 | orchestrator | Monday 02 June 2025 01:00:53 +0000 (0:00:01.325) 0:04:40.063 *********** 2025-06-02 01:04:48.524514 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 01:04:48.524525 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 01:04:48.524537 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova 
secret', 'enabled': True}) 2025-06-02 01:04:48.524553 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 01:04:48.524564 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 01:04:48.524575 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 01:04:48.524586 | orchestrator | 2025-06-02 01:04:48.524597 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-02 01:04:48.524608 | orchestrator | Monday 02 June 2025 01:00:56 +0000 (0:00:03.044) 0:04:43.107 *********** 2025-06-02 01:04:48.524619 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 01:04:48.524630 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 01:04:48.524641 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 01:04:48.524652 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 01:04:48.524663 | orchestrator | changed: [testbed-node-5] 2025-06-02 01:04:48.524674 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 01:04:48.524685 | orchestrator | changed: [testbed-node-3] 2025-06-02 01:04:48.524696 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 01:04:48.524707 | orchestrator | changed: [testbed-node-4] 2025-06-02 01:04:48.524718 | orchestrator | 2025-06-02 01:04:48.524729 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-02 01:04:48.524746 | orchestrator | Monday 02 June 2025 01:00:59 +0000 (0:00:02.915) 0:04:46.023 *********** 2025-06-02 01:04:48.524757 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.524775 | orchestrator | 2025-06-02 01:04:48.524793 | 
orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-06-02 01:04:48.524813 | orchestrator | Monday 02 June 2025 01:00:59 +0000 (0:00:00.105) 0:04:46.128 *********** 2025-06-02 01:04:48.524832 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.524850 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.524868 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.524886 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.524904 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.524923 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.524943 | orchestrator | 2025-06-02 01:04:48.524962 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-02 01:04:48.524981 | orchestrator | Monday 02 June 2025 01:01:00 +0000 (0:00:00.778) 0:04:46.907 *********** 2025-06-02 01:04:48.525001 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 01:04:48.525021 | orchestrator | 2025-06-02 01:04:48.525040 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-06-02 01:04:48.525058 | orchestrator | Monday 02 June 2025 01:01:01 +0000 (0:00:00.636) 0:04:47.543 *********** 2025-06-02 01:04:48.525069 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.525080 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.525091 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.525102 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.525113 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.525156 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.525168 | orchestrator | 2025-06-02 01:04:48.525179 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-02 01:04:48.525190 | orchestrator | Monday 02 June 2025 01:01:01 +0000 (0:00:00.542) 0:04:48.086 
*********** 2025-06-02 01:04:48.525201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 
01:04:48.525331 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525439 | orchestrator | 2025-06-02 01:04:48.525451 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-02 01:04:48.525462 | orchestrator | Monday 02 June 2025 01:01:06 +0000 (0:00:04.515) 0:04:52.601 *********** 2025-06-02 01:04:48.525474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 01:04:48.525486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 01:04:48.525497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 01:04:48.525514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 01:04:48.525531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 01:04:48.525549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}})  2025-06-02 01:04:48.525561 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 01:04:48.525686 | orchestrator | 2025-06-02 01:04:48.525698 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-02 01:04:48.525709 | orchestrator | Monday 02 June 2025 01:01:12 +0000 (0:00:06.418) 0:04:59.019 *********** 2025-06-02 01:04:48.525720 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.525731 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.525748 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.525759 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.525770 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.525781 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.525792 | orchestrator | 2025-06-02 01:04:48.525809 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-02 01:04:48.525820 | orchestrator | Monday 02 June 2025 01:01:14 +0000 (0:00:01.507) 0:05:00.527 *********** 2025-06-02 01:04:48.525831 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 01:04:48.525842 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 01:04:48.525853 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 01:04:48.525864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 01:04:48.525875 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 01:04:48.525886 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 01:04:48.525901 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 01:04:48.525912 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 01:04:48.525922 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 01:04:48.525933 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.525944 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 01:04:48.525955 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.525966 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 01:04:48.525977 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 01:04:48.525988 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 01:04:48.525999 | orchestrator | 2025-06-02 01:04:48.526010 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-02 01:04:48.526070 | orchestrator | Monday 02 June 2025 01:01:18 +0000 (0:00:04.183) 0:05:04.710 *********** 2025-06-02 01:04:48.526082 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.526093 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.526105 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.526116 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.526142 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.526153 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.526164 | orchestrator | 2025-06-02 01:04:48.526175 | 
orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-02 01:04:48.526186 | orchestrator | Monday 02 June 2025 01:01:19 +0000 (0:00:00.791) 0:05:05.502 *********** 2025-06-02 01:04:48.526197 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 01:04:48.526208 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 01:04:48.526219 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 01:04:48.526230 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 01:04:48.526241 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 01:04:48.526251 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 01:04:48.526262 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 01:04:48.526280 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 01:04:48.526291 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 01:04:48.526302 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 01:04:48.526313 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.526324 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 01:04:48.526335 
| orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.526346 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 01:04:48.526357 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.526368 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 01:04:48.526379 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 01:04:48.526390 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 01:04:48.526401 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 01:04:48.526412 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 01:04:48.526506 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 01:04:48.526520 | orchestrator | 2025-06-02 01:04:48.526532 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-02 01:04:48.526543 | orchestrator | Monday 02 June 2025 01:01:24 +0000 (0:00:05.411) 0:05:10.913 *********** 2025-06-02 01:04:48.526553 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 01:04:48.526564 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 01:04:48.526575 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 01:04:48.526591 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 
01:04:48.526603 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 01:04:48.526614 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 01:04:48.526625 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 01:04:48.526635 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 01:04:48.526646 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 01:04:48.526657 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 01:04:48.526667 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 01:04:48.526678 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 01:04:48.526689 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.526700 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 01:04:48.526710 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 01:04:48.526721 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.526732 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 01:04:48.526751 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 01:04:48.526761 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 01:04:48.526772 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.526783 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 01:04:48.526794 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 01:04:48.526804 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 01:04:48.526815 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 01:04:48.526826 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 01:04:48.526836 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 01:04:48.526847 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 01:04:48.526858 | orchestrator | 2025-06-02 01:04:48.526869 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-02 01:04:48.526880 | orchestrator | Monday 02 June 2025 01:01:32 +0000 (0:00:08.232) 0:05:19.146 *********** 2025-06-02 01:04:48.526891 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.526901 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.526912 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.526923 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.526934 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.526945 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.526955 | orchestrator | 2025-06-02 01:04:48.526966 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-02 01:04:48.526977 | orchestrator | Monday 02 June 2025 01:01:33 +0000 (0:00:00.737) 0:05:19.883 *********** 2025-06-02 01:04:48.526987 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.526998 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.527009 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.527020 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.527030 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.527041 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.527052 | orchestrator | 2025-06-02 01:04:48.527063 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-02 01:04:48.527073 | orchestrator | Monday 02 June 2025 01:01:34 +0000 (0:00:00.542) 0:05:20.426 *********** 2025-06-02 01:04:48.527084 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.527095 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:48.527105 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:48.527116 | orchestrator | changed: [testbed-node-3] 2025-06-02 01:04:48.527143 | orchestrator | changed: [testbed-node-5] 2025-06-02 01:04:48.527155 | orchestrator | changed: [testbed-node-4] 2025-06-02 01:04:48.527166 | orchestrator | 2025-06-02 01:04:48.527177 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-02 01:04:48.527188 | orchestrator | Monday 02 June 2025 01:01:36 +0000 (0:00:02.364) 0:05:22.790 *********** 2025-06-02 01:04:48.527215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2025-06-02 01:04:48.527244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 01:04:48.527257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.527268 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:04:48.527280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 01:04:48.527292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 01:04:48.527309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
2025-06-02 01:04:48.527329 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:04:48.527345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 01:04:48.527357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 01:04:48.527369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.527380 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:04:48.527392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 01:04:48.527403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 01:04:48.527414 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:48.527432 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.527455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.527467 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.527478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.527490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.527501 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.527512 | orchestrator |
2025-06-02 01:04:48.527523 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-06-02 01:04:48.527534 | orchestrator | Monday 02 June 2025 01:01:38 +0000 (0:00:01.978) 0:05:24.769 ***********
2025-06-02 01:04:48.527546 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-06-02 01:04:48.527557 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-06-02 01:04:48.527568 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:04:48.527579 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-06-02 01:04:48.527590 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-06-02 01:04:48.527601 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:04:48.527612 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-06-02 01:04:48.527623 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-06-02 01:04:48.527634 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:04:48.527645 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-06-02 01:04:48.527656 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-06-02 01:04:48.527667 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.527678 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-06-02 01:04:48.527688 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-06-02 01:04:48.527699 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.527710 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-06-02 01:04:48.527728 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-06-02 01:04:48.527738 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.527749 | orchestrator |
2025-06-02 01:04:48.527761 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-06-02 01:04:48.527772 | orchestrator | Monday 02 June 2025 01:01:39 +0000 (0:00:00.982) 0:05:25.752 ***********
2025-06-02 01:04:48.527789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.527806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.527818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 01:04:48.527829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.527841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.527867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.527880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.527897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 01:04:48.527908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 01:04:48.527920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.527931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.527949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.527971 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.527983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.527995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 01:04:48.528006 | orchestrator |
2025-06-02 01:04:48.528017 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-02 01:04:48.528029 | orchestrator | Monday 02 June 2025 01:01:43 +0000 (0:00:04.063) 0:05:29.815 ***********
2025-06-02 01:04:48.528040 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:04:48.528051 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:04:48.528062 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:04:48.528072 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.528083 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.528094 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.528105 | orchestrator |
2025-06-02 01:04:48.528116 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 01:04:48.528145 | orchestrator | Monday 02 June 2025 01:01:44 +0000 (0:00:00.488) 0:05:30.303 ***********
2025-06-02 01:04:48.528157 | orchestrator |
2025-06-02 01:04:48.528174 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 01:04:48.528185 | orchestrator | Monday 02 June 2025 01:01:44 +0000 (0:00:00.235) 0:05:30.539 ***********
2025-06-02 01:04:48.528196 | orchestrator |
2025-06-02 01:04:48.528207 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 01:04:48.528218 | orchestrator | Monday 02 June 2025 01:01:44 +0000 (0:00:00.120) 0:05:30.660 ***********
2025-06-02 01:04:48.528229 | orchestrator |
2025-06-02 01:04:48.528240 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 01:04:48.528251 | orchestrator | Monday 02 June 2025 01:01:44 +0000 (0:00:00.116) 0:05:30.776 ***********
2025-06-02 01:04:48.528262 | orchestrator |
2025-06-02 01:04:48.528273 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 01:04:48.528284 | orchestrator | Monday 02 June 2025 01:01:44 +0000 (0:00:00.115) 0:05:30.892 ***********
2025-06-02 01:04:48.528295 | orchestrator |
2025-06-02 01:04:48.528305 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 01:04:48.528316 | orchestrator | Monday 02 June 2025 01:01:44 +0000 (0:00:00.120) 0:05:31.013 ***********
2025-06-02 01:04:48.528327 | orchestrator |
2025-06-02 01:04:48.528338 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-06-02 01:04:48.528348 | orchestrator | Monday 02 June 2025 01:01:44 +0000 (0:00:00.122) 0:05:31.135 ***********
2025-06-02 01:04:48.528359 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:48.528370 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:04:48.528381 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:04:48.528392 | orchestrator |
2025-06-02 01:04:48.528403 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-06-02 01:04:48.528413 | orchestrator | Monday 02 June 2025 01:01:51 +0000 (0:00:06.820) 0:05:37.956 ***********
2025-06-02 01:04:48.528424 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:48.528435 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:04:48.528446 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:04:48.528457 | orchestrator |
2025-06-02 01:04:48.528468 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-06-02 01:04:48.528479 | orchestrator | Monday 02 June 2025 01:02:08 +0000 (0:00:16.707) 0:05:54.664 ***********
2025-06-02 01:04:48.528490 | orchestrator | changed: [testbed-node-5]
2025-06-02 01:04:48.528506 | orchestrator | changed: [testbed-node-3]
2025-06-02 01:04:48.528517 | orchestrator | changed: [testbed-node-4]
2025-06-02 01:04:48.528528 | orchestrator |
2025-06-02 01:04:48.528539 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-06-02 01:04:48.528550 | orchestrator | Monday 02 June 2025 01:02:32 +0000 (0:00:24.217) 0:06:18.881 ***********
2025-06-02 01:04:48.528561 | orchestrator | changed: [testbed-node-3]
2025-06-02 01:04:48.528572 | orchestrator | changed: [testbed-node-5]
2025-06-02 01:04:48.528583 | orchestrator | changed: [testbed-node-4]
2025-06-02 01:04:48.528594 | orchestrator |
2025-06-02 01:04:48.528605 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-06-02 01:04:48.528616 | orchestrator | Monday 02 June 2025 01:03:11 +0000 (0:00:38.675) 0:06:57.557 ***********
2025-06-02 01:04:48.528626 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2025-06-02 01:04:48.528642 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2025-06-02 01:04:48.528654 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2025-06-02 01:04:48.528665 | orchestrator | changed: [testbed-node-3]
2025-06-02 01:04:48.528676 | orchestrator | changed: [testbed-node-4]
2025-06-02 01:04:48.528687 | orchestrator | changed: [testbed-node-5]
2025-06-02 01:04:48.528698 | orchestrator |
2025-06-02 01:04:48.528708 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-06-02 01:04:48.528719 | orchestrator | Monday 02 June 2025 01:03:17 +0000 (0:00:06.465) 0:07:04.022 ***********
2025-06-02 01:04:48.528736 | orchestrator | changed: [testbed-node-3]
2025-06-02 01:04:48.528747 | orchestrator | changed: [testbed-node-4]
2025-06-02 01:04:48.528758 | orchestrator | changed: [testbed-node-5]
2025-06-02 01:04:48.528777 | orchestrator |
2025-06-02 01:04:48.528797 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-06-02 01:04:48.528815 | orchestrator | Monday 02 June 2025 01:03:18 +0000 (0:00:00.847) 0:07:04.870 ***********
2025-06-02 01:04:48.528833 | orchestrator | changed: [testbed-node-3]
2025-06-02 01:04:48.528853 | orchestrator | changed: [testbed-node-5]
2025-06-02 01:04:48.528873 | orchestrator | changed: [testbed-node-4]
2025-06-02 01:04:48.528892 | orchestrator |
2025-06-02 01:04:48.528905 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-06-02 01:04:48.528917 | orchestrator | Monday 02 June 2025 01:03:42 +0000 (0:00:23.449) 0:07:28.319 ***********
2025-06-02 01:04:48.528927 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:04:48.528938 | orchestrator |
2025-06-02 01:04:48.528949 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-06-02 01:04:48.528959 | orchestrator | Monday 02 June 2025 01:03:42 +0000 (0:00:00.122) 0:07:28.441 ***********
2025-06-02 01:04:48.528970 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.528981 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:04:48.528992 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:04:48.529003 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.529014 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.529025 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-06-02 01:04:48.529036 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-02 01:04:48.529047 | orchestrator |
2025-06-02 01:04:48.529057 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-06-02 01:04:48.529068 | orchestrator | Monday 02 June 2025 01:04:02 +0000 (0:00:20.730) 0:07:49.172 ***********
2025-06-02 01:04:48.529079 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.529090 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:04:48.529100 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:04:48.529111 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:04:48.529178 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.529192 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.529203 | orchestrator |
2025-06-02 01:04:48.529219 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-06-02 01:04:48.529238 | orchestrator | Monday 02 June 2025 01:04:11 +0000 (0:00:08.922) 0:07:58.095 ***********
2025-06-02 01:04:48.529255 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:04:48.529272 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:04:48.529290 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.529310 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.529326 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.529338 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-06-02 01:04:48.529349 | orchestrator |
2025-06-02 01:04:48.529360 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-02 01:04:48.529371 | orchestrator | Monday 02 June 2025 01:04:15 +0000 (0:00:03.820) 0:08:01.916 ***********
2025-06-02 01:04:48.529382 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-02 01:04:48.529393 | orchestrator |
2025-06-02 01:04:48.529404 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-02 01:04:48.529415 | orchestrator | Monday 02 June 2025 01:04:27 +0000 (0:00:11.593) 0:08:13.509 ***********
2025-06-02 01:04:48.529425 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-02 01:04:48.529436 | orchestrator |
2025-06-02 01:04:48.529447 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-06-02 01:04:48.529458 | orchestrator | Monday 02 June 2025 01:04:28 +0000 (0:00:01.227) 0:08:14.737 ***********
2025-06-02 01:04:48.529478 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:04:48.529489 | orchestrator |
2025-06-02 01:04:48.529500 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-06-02 01:04:48.529510 | orchestrator | Monday 02 June 2025 01:04:29 +0000 (0:00:01.191) 0:08:15.928 ***********
2025-06-02 01:04:48.529521 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-02 01:04:48.529532 | orchestrator |
2025-06-02 01:04:48.529543 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-06-02 01:04:48.529562 | orchestrator | Monday 02 June 2025 01:04:39 +0000 (0:00:09.823) 0:08:25.752 ***********
2025-06-02 01:04:48.529572 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:04:48.529582 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:04:48.529592 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:04:48.529602 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:48.529613 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:04:48.529622 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:04:48.529632 | orchestrator |
2025-06-02 01:04:48.529642 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-06-02 01:04:48.529652 | orchestrator |
2025-06-02 01:04:48.529661 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-06-02 01:04:48.529671 | orchestrator | Monday 02 June 2025 01:04:41 +0000 (0:00:01.603) 0:08:27.355 ***********
2025-06-02 01:04:48.529681 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:48.529690 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:04:48.529701 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:04:48.529710 | orchestrator |
2025-06-02 01:04:48.529726 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-06-02 01:04:48.529736 | orchestrator |
2025-06-02 01:04:48.529745 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-06-02 01:04:48.529755 | orchestrator | Monday 02 June 2025 01:04:42 +0000 (0:00:01.006) 0:08:28.362 ***********
2025-06-02 01:04:48.529765 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.529774 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.529784 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.529794 | orchestrator |
2025-06-02 01:04:48.529804 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-06-02 01:04:48.529813 | orchestrator |
2025-06-02 01:04:48.529823 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-06-02 01:04:48.529833 | orchestrator | Monday 02 June 2025 01:04:42 +0000 (0:00:00.481) 0:08:28.844 ***********
2025-06-02 01:04:48.529842 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-06-02 01:04:48.529852 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-06-02 01:04:48.529862 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-06-02 01:04:48.529872 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-06-02 01:04:48.529881 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-06-02 01:04:48.529891 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-06-02 01:04:48.529901 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:04:48.529911 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-06-02 01:04:48.529920 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-06-02 01:04:48.529930 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-06-02 01:04:48.529940 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-06-02 01:04:48.529949 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-06-02 01:04:48.529959 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-06-02 01:04:48.529969 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-06-02 01:04:48.529978 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-06-02 01:04:48.529994 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-06-02 01:04:48.530054 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-06-02 01:04:48.530070 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-06-02 01:04:48.530080 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-06-02 01:04:48.530090 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:04:48.530100 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-06-02 01:04:48.530110 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-06-02 01:04:48.530119 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-06-02 01:04:48.530147 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-06-02 01:04:48.530157 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-06-02 01:04:48.530166 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-06-02 01:04:48.530176 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:04:48.530186 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-06-02 01:04:48.530195 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-06-02 01:04:48.530205 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-06-02 01:04:48.530214 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-06-02 01:04:48.530224 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-06-02 01:04:48.530234 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-06-02 01:04:48.530244 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.530254 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.530264 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-06-02 01:04:48.530273 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-06-02 01:04:48.530283 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-06-02 01:04:48.530293 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-06-02 01:04:48.530302 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-06-02 01:04:48.530312 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-06-02 01:04:48.530322 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.530332 | orchestrator |
2025-06-02 01:04:48.530341 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-06-02 01:04:48.530351 | orchestrator |
2025-06-02 01:04:48.530361 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-06-02 01:04:48.530371 | orchestrator | Monday 02 June 2025 01:04:43 +0000 (0:00:01.193) 0:08:30.037 ***********
2025-06-02 01:04:48.530380 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-06-02 01:04:48.530397 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-06-02 01:04:48.530408 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.530417 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-06-02 01:04:48.530427 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-06-02 01:04:48.530437 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.530446 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-06-02 01:04:48.530456 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-06-02 01:04:48.530466 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.530476 | orchestrator |
2025-06-02 01:04:48.530486 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-06-02 01:04:48.530495 | orchestrator |
2025-06-02 01:04:48.530505 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-06-02 01:04:48.530520 | orchestrator | Monday 02 June 2025 01:04:44 +0000 (0:00:00.688) 0:08:30.726 ***********
2025-06-02 01:04:48.530530 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.530539 | orchestrator |
2025-06-02 01:04:48.530549 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-06-02 01:04:48.530566 | orchestrator |
2025-06-02 01:04:48.530575 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-06-02 01:04:48.530585 | orchestrator | Monday 02 June 2025 01:04:45 +0000 (0:00:00.644) 0:08:31.371 ***********
2025-06-02 01:04:48.530595 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:48.530605 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:48.530614 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:48.530624 | orchestrator |
2025-06-02 01:04:48.530634 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 01:04:48.530644 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 01:04:48.530655 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-06-02 01:04:48.530665 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-02 01:04:48.530675 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-02 01:04:48.530685 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-02 01:04:48.530694 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-06-02 01:04:48.530704 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-06-02 01:04:48.530714 | orchestrator |
2025-06-02 01:04:48.530724 | orchestrator |
2025-06-02 01:04:48.530734 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 01:04:48.530743 | orchestrator | Monday 02 June 2025 01:04:45 +0000 (0:00:00.426) 0:08:31.797 ***********
2025-06-02 01:04:48.530753 | orchestrator | ===============================================================================
2025-06-02 01:04:48.530763 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.68s
2025-06-02 01:04:48.530772 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 26.05s
2025-06-02 01:04:48.530782 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.22s
2025-06-02 01:04:48.530791 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.45s
2025-06-02 01:04:48.530801 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.44s
2025-06-02 01:04:48.530810 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.73s
2025-06-02 01:04:48.530820 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.34s
2025-06-02 01:04:48.530830 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.71s
2025-06-02 01:04:48.530839 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.63s
2025-06-02 01:04:48.530849 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.99s
2025-06-02 01:04:48.530858 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.59s
2025-06-02 01:04:48.530868 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.49s
2025-06-02 01:04:48.530878 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.05s
2025-06-02 01:04:48.530887 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.72s
2025-06-02 01:04:48.530897 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.06s
2025-06-02 01:04:48.530907 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.82s
2025-06-02 01:04:48.530916 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.92s
2025-06-02 01:04:48.530932 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.23s
2025-06-02 01:04:48.530942 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.21s
2025-06-02 01:04:48.530951 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.01s
2025-06-02 01:04:48.530965 | orchestrator | 2025-06-02 01:04:48 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED
2025-06-02 01:04:48.530976 | orchestrator | 2025-06-02 01:04:48 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state STARTED
2025-06-02 01:04:48.530986 | orchestrator | 2025-06-02 01:04:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 01:04:51.558286 | orchestrator | 2025-06-02 01:04:51 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED
2025-06-02 01:04:51.558613 | orchestrator | 2025-06-02 01:04:51 | INFO  | Task 515b237f-52a3-4385-8c9e-cc3f63cef4c3 is in state SUCCESS
2025-06-02 01:04:51.560696 | orchestrator |
2025-06-02 01:04:51.560765 | orchestrator |
2025-06-02 01:04:51.560795 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 01:04:51.560809 | orchestrator |
2025-06-02 01:04:51.560820 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 01:04:51.560832 | orchestrator | Monday 02 June 2025 01:02:33 +0000 (0:00:00.251) 0:00:00.251 ***********
2025-06-02 01:04:51.560844 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:51.560858 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:04:51.560869 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:04:51.560880 | orchestrator |
2025-06-02 01:04:51.560891 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 01:04:51.560903 |
orchestrator | Monday 02 June 2025 01:02:33 +0000 (0:00:00.365) 0:00:00.616 ***********
2025-06-02 01:04:51.560914 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-06-02 01:04:51.560926 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-06-02 01:04:51.560937 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-06-02 01:04:51.560948 | orchestrator |
2025-06-02 01:04:51.560959 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-06-02 01:04:51.560969 | orchestrator |
2025-06-02 01:04:51.560980 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-02 01:04:51.560991 | orchestrator | Monday 02 June 2025 01:02:33 +0000 (0:00:00.345) 0:00:00.962 ***********
2025-06-02 01:04:51.561002 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 01:04:51.561013 | orchestrator |
2025-06-02 01:04:51.561023 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-06-02 01:04:51.561034 | orchestrator | Monday 02 June 2025 01:02:34 +0000 (0:00:00.492) 0:00:01.455 ***********
2025-06-02 01:04:51.561048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02
01:04:51.561064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.561097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.561110 | orchestrator | 2025-06-02 01:04:51.561156 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-02 01:04:51.561171 | orchestrator | Monday 02 June 2025 01:02:35 +0000 (0:00:00.881) 0:00:02.336 *********** 2025-06-02 01:04:51.561182 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-06-02 01:04:51.561194 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-02 01:04:51.561205 | orchestrator | ok: [testbed-node-0 
-> localhost] 2025-06-02 01:04:51.561216 | orchestrator | 2025-06-02 01:04:51.561227 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-02 01:04:51.561238 | orchestrator | Monday 02 June 2025 01:02:36 +0000 (0:00:00.956) 0:00:03.293 *********** 2025-06-02 01:04:51.561249 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:04:51.561262 | orchestrator | 2025-06-02 01:04:51.561282 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-02 01:04:51.561300 | orchestrator | Monday 02 June 2025 01:02:36 +0000 (0:00:00.694) 0:00:03.987 *********** 2025-06-02 01:04:51.562181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.562218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.562230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.562255 | orchestrator | 2025-06-02 01:04:51.562267 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-02 01:04:51.562279 | orchestrator | Monday 02 June 2025 01:02:38 +0000 (0:00:01.386) 0:00:05.373 *********** 2025-06-02 01:04:51.562291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 01:04:51.562302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 01:04:51.562323 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:51.562343 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:51.562428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 01:04:51.562451 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:51.562464 | orchestrator | 2025-06-02 01:04:51.562475 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-06-02 01:04:51.562486 | orchestrator | Monday 02 June 2025 01:02:38 +0000 (0:00:00.297) 0:00:05.671 *********** 2025-06-02 01:04:51.562497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 01:04:51.562510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 01:04:51.562530 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:51.562541 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:51.562582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 01:04:51.562594 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:51.562605 | orchestrator | 2025-06-02 01:04:51.562616 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-06-02 01:04:51.562627 | orchestrator | Monday 02 June 2025 01:02:39 +0000 (0:00:00.606) 0:00:06.278 *********** 2025-06-02 01:04:51.562638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.562686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.562706 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.562718 | orchestrator | 2025-06-02 01:04:51.562729 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-06-02 01:04:51.562740 | orchestrator | Monday 02 June 2025 01:02:40 +0000 (0:00:01.176) 0:00:07.454 *********** 2025-06-02 01:04:51.562751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.562771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.562783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 01:04:51.562794 | orchestrator | 2025-06-02 01:04:51.562805 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-06-02 01:04:51.562816 | orchestrator | Monday 02 June 2025 01:02:41 +0000 (0:00:01.450) 0:00:08.904 *********** 2025-06-02 01:04:51.562827 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:04:51.562838 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:04:51.562849 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:04:51.562860 | orchestrator | 2025-06-02 01:04:51.562871 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-06-02 01:04:51.562882 | orchestrator | Monday 02 June 2025 01:02:42 +0000 (0:00:00.628) 0:00:09.533 *********** 2025-06-02 01:04:51.562893 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 01:04:51.562905 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 01:04:51.562916 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 01:04:51.562926 | orchestrator |
2025-06-02 01:04:51.562937 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-06-02 01:04:51.562948 | orchestrator | Monday 02 June 2025 01:02:43 +0000 (0:00:01.276) 0:00:10.810 ***********
2025-06-02 01:04:51.562959 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 01:04:51.562999 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 01:04:51.563017 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 01:04:51.563028 | orchestrator |
2025-06-02 01:04:51.563039 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-06-02 01:04:51.563050 | orchestrator | Monday 02 June 2025 01:02:44 +0000 (0:00:01.117) 0:00:11.927 ***********
2025-06-02 01:04:51.563068 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 01:04:51.563079 | orchestrator |
2025-06-02 01:04:51.563089 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-06-02 01:04:51.563100 | orchestrator | Monday 02 June 2025 01:02:45 +0000 (0:00:00.620) 0:00:12.547 ***********
2025-06-02 01:04:51.563111 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-06-02 01:04:51.563186 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-06-02 01:04:51.563199 | orchestrator
| ok: [testbed-node-0]
2025-06-02 01:04:51.563210 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:04:51.563222 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:04:51.563232 | orchestrator |
2025-06-02 01:04:51.563243 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-06-02 01:04:51.563254 | orchestrator | Monday 02 June 2025 01:02:46 +0000 (0:00:00.589) 0:00:13.137 ***********
2025-06-02 01:04:51.563265 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:51.563276 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:51.563287 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:51.563298 | orchestrator |
2025-06-02 01:04:51.563308 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-06-02 01:04:51.563319 | orchestrator | Monday 02 June 2025 01:02:46 +0000 (0:00:00.387) 0:00:13.525 ***********
2025-06-02 01:04:51.563331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084386, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0746534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.563343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084386,
'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0746534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084386, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0746534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1084363, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0676534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 19695, 'inode': 1084363, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0676534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1084363, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0676534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1084352, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0656533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1084352, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0656533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1084352, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0656533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1084380, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0716534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1084380, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0716534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1084380, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0716534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1084340, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0636532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1084340, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0636532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1084340, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0636532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1084354, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0666533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1084354, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0666533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1084354, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0666533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1084378, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0706534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1084378, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0706534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1084378, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0706534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1084337, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0626533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1084337, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0626533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1084337, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0626533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1084317, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0586534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': 
'/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1084317, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0586534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1084317, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0586534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1084343, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0646534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1084343, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0646534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1084343, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0646534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1084328, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0606532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563919 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1084328, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0606532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1084328, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0606532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084370, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0696535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563950 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084370, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0696535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084370, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0696535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.563988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1084344, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0646534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
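The verbose loop output above repeats a full stat dictionary per host for every dashboard file, which is hard to scan by eye. A minimal sketch of how one such entry could be parsed programmatically is shown below; `parse_loop_entry` is a hypothetical helper (not part of Zuul or Ansible), and it assumes the entries follow exactly the `changed: [host] => (item={...})` shape seen in this log, where the item is a valid Python dict literal.

```python
import ast
import re

# Matches one loop-result record from the console log above.
# The item payload is a Python dict literal, so ast.literal_eval can parse it.
ENTRY_RE = re.compile(r"changed: \[(?P<host>[\w-]+)\] => \(item=(?P<item>\{.*\})\)")

def parse_loop_entry(line):
    """Return (host, item key, file path, size) for a single loop-result line,
    or None if the line does not match the expected format."""
    m = ENTRY_RE.search(line)
    if m is None:
        return None
    item = ast.literal_eval(m.group("item"))
    value = item["value"]
    return m.group("host"), item["key"], value["path"], value["size"]

# Abbreviated sample entry in the same shape as the log lines above.
sample = (
    "changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', "
    "'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', "
    "'mode': '0644', 'size': 25686}})"
)
print(parse_loop_entry(sample))
```

Collecting the parsed tuples across all entries would give a compact per-file summary (path, size, and which hosts changed) instead of the raw repeated dictionaries.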
2025-06-02 01:04:51.563999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1084344, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0646534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1084344, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0646534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084383, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0716534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084383, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0716534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084383, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0716534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084334, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0626533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084334, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0626533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084334, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0626533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1084356, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0676534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1084356, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0676534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1084356, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0676534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1084319, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0606532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1084319, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0606532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1084319, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0606532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1084331, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0616534, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1084331, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0616534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1084331, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0616534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084348, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 
1748724126.0, 'ctime': 1748823528.0656533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084348, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0656533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084348, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0656533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 
'inode': 1084451, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1016538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084451, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1016538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084451, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1016538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084439, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0886538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084439, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0886538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084439, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0886538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1084393, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0756536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1084393, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0756536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1084393, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0756536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 
01:04:51.564436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084505, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.106654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084505, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.106654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084505, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.106654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084397, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0766535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084397, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0766535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084397, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0766535, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084501, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1046538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084501, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1046538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 22317, 'inode': 1084501, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.1046538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084509, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.109654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084509, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.109654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084509, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.109654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084489, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.102654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084489, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.102654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084489, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.102654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1084497, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.103654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1084497, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.103654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564675 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1084497, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.103654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084400, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0776534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084400, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0776534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-06-02 01:04:51.564716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084400, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0776534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084443, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0886538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084443, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0886538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084443, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0886538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084515, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.109654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084515, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.109654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084515, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.109654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084503, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.105654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084503, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.105654, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084503, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.105654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084404, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0806537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084404, 'dev': 109, 'nlink': 1, 'atime': 
1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0806537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084404, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0806537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1084401, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0786536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 30898, 'inode': 1084401, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0786536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1084401, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0786536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084408, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0826535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084408, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0826535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084408, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0826535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1084415, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0876536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1084415, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0876536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.564989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1084415, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0876536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.565000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084444, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0906537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 01:04:51.565010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084444, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0906537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084444, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0906537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084493, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.103654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084493, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.103654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084493, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.103654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1084447, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0906537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1084447, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0906537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1084447, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.0906537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1084520, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.110654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1084520, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.110654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1084520, 'dev': 109, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748823528.110654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 01:04:51.565177 | orchestrator |
2025-06-02 01:04:51.565187 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-06-02 01:04:51.565197 | orchestrator | Monday 02 June 2025 01:03:22 +0000 (0:00:35.722) 0:00:49.247 ***********
2025-06-02 01:04:51.565207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 01:04:51.565217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 01:04:51.565227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 01:04:51.565237 | orchestrator |
2025-06-02 01:04:51.565247 | orchestrator | TASK 
[grafana : Creating grafana database] *************************************
2025-06-02 01:04:51.565256 | orchestrator | Monday 02 June 2025 01:03:23 +0000 (0:00:01.068) 0:00:50.316 ***********
2025-06-02 01:04:51.565267 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:51.565276 | orchestrator |
2025-06-02 01:04:51.565286 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-06-02 01:04:51.565301 | orchestrator | Monday 02 June 2025 01:03:25 +0000 (0:00:02.127) 0:00:52.444 ***********
2025-06-02 01:04:51.565322 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:51.565332 | orchestrator |
2025-06-02 01:04:51.565342 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 01:04:51.565352 | orchestrator | Monday 02 June 2025 01:03:27 +0000 (0:00:00.165) 0:00:54.494 ***********
2025-06-02 01:04:51.565361 | orchestrator |
2025-06-02 01:04:51.565371 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 01:04:51.565380 | orchestrator | Monday 02 June 2025 01:03:27 +0000 (0:00:00.057) 0:00:54.659 ***********
2025-06-02 01:04:51.565390 | orchestrator |
2025-06-02 01:04:51.565400 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 01:04:51.565409 | orchestrator | Monday 02 June 2025 01:03:27 +0000 (0:00:00.058) 0:00:54.716 ***********
2025-06-02 01:04:51.565419 | orchestrator |
2025-06-02 01:04:51.565428 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-06-02 01:04:51.565439 | orchestrator | Monday 02 June 2025 01:03:27 +0000 (0:00:00.058) 0:00:54.775 ***********
2025-06-02 01:04:51.565456 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:51.565473 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:51.565492 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:04:51.565510 | orchestrator |
2025-06-02 01:04:51.565526 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-06-02 01:04:51.565542 | orchestrator | Monday 02 June 2025 01:03:34 +0000 (0:00:06.875) 0:01:01.650 ***********
2025-06-02 01:04:51.565552 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:51.565562 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:51.565572 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-06-02 01:04:51.565581 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-06-02 01:04:51.565591 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-06-02 01:04:51.565601 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:51.565610 | orchestrator |
2025-06-02 01:04:51.565620 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-06-02 01:04:51.565630 | orchestrator | Monday 02 June 2025 01:04:12 +0000 (0:00:38.106) 0:01:39.756 ***********
2025-06-02 01:04:51.565639 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:51.565649 | orchestrator | changed: [testbed-node-2]
2025-06-02 01:04:51.565658 | orchestrator | changed: [testbed-node-1]
2025-06-02 01:04:51.565668 | orchestrator |
2025-06-02 01:04:51.565677 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-06-02 01:04:51.565687 | orchestrator | Monday 02 June 2025 01:04:46 +0000 (0:00:33.420) 0:02:13.176 ***********
2025-06-02 01:04:51.565696 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:04:51.565706 | orchestrator |
2025-06-02 01:04:51.565716 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-06-02 01:04:51.565725 | orchestrator | Monday 02 June 2025 
01:04:48 +0000 (0:00:02.200) 0:02:15.376 ***********
2025-06-02 01:04:51.565734 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:51.565744 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:04:51.565754 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:04:51.565763 | orchestrator |
2025-06-02 01:04:51.565773 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-06-02 01:04:51.565782 | orchestrator | Monday 02 June 2025 01:04:48 +0000 (0:00:00.322) 0:02:15.699 ***********
2025-06-02 01:04:51.565792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-06-02 01:04:51.565803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-06-02 01:04:51.565821 | orchestrator |
2025-06-02 01:04:51.565832 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-06-02 01:04:51.565841 | orchestrator | Monday 02 June 2025 01:04:50 +0000 (0:00:02.271) 0:02:17.971 ***********
2025-06-02 01:04:51.565851 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:04:51.565860 | orchestrator |
2025-06-02 01:04:51.565870 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 01:04:51.565880 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 01:04:51.565890 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 01:04:51.565900 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 01:04:51.565910 | orchestrator |
2025-06-02 01:04:51.565920 | orchestrator |
2025-06-02 01:04:51.565929 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 01:04:51.565939 | orchestrator | Monday 02 June 2025 01:04:51 +0000 (0:00:00.255) 0:02:18.226 ***********
2025-06-02 01:04:51.565949 | orchestrator | ===============================================================================
2025-06-02 01:04:51.565964 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.11s
2025-06-02 01:04:51.565979 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 35.72s
2025-06-02 01:04:51.565988 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.42s
2025-06-02 01:04:51.565998 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.88s
2025-06-02 01:04:51.566008 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.27s
2025-06-02 01:04:51.566061 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.20s
2025-06-02 01:04:51.566074 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.13s
2025-06-02 01:04:51.566084 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.05s
2025-06-02 01:04:51.566094 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s
2025-06-02 01:04:51.566103 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.39s
2025-06-02 01:04:51.566113 | orchestrator | grafana : Configuring Prometheus as data source for Grafana 
------------- 1.28s
2025-06-02 01:04:51.566176 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.18s
2025-06-02 01:04:51.566188 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.12s
2025-06-02 01:04:51.566198 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s
2025-06-02 01:04:51.566207 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.96s
2025-06-02 01:04:51.566217 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.88s
2025-06-02 01:04:51.566227 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.69s
2025-06-02 01:04:51.566236 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.63s
2025-06-02 01:04:51.566246 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.62s
2025-06-02 01:04:51.566255 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.61s
2025-06-02 01:04:51.566265 | orchestrator | 2025-06-02 01:04:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 01:04:54.602671 | orchestrator | 2025-06-02 01:04:54 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state STARTED
2025-06-02 01:04:54.602821 | orchestrator | 2025-06-02 01:04:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 01:07:17.673332 | orchestrator | 2025-06-02 01:07:17 | INFO  | Task 7918b9cf-b8f7-4e28-8928-3b8d5bca98da is in state SUCCESS
2025-06-02 01:07:17.675912 | orchestrator |
2025-06-02 01:07:17.676016 | orchestrator |
2025-06-02 01:07:17.676032 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 01:07:17.676045 | orchestrator |
2025-06-02 01:07:17.676056 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 01:07:17.676068 | orchestrator | Monday 02 June 2025 01:02:44 +0000 (0:00:00.228) 0:00:00.228 ***********
2025-06-02 01:07:17.676457 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:07:17.676482 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:07:17.676496 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:07:17.676515 | orchestrator |
2025-06-02 01:07:17.676534 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 01:07:17.676572 | orchestrator | Monday 02 June 2025 01:02:45 +0000 
(0:00:00.246) 0:00:00.474 ***********
2025-06-02 01:07:17.676592 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-06-02 01:07:17.676610 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-06-02 01:07:17.676628 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-06-02 01:07:17.676646 | orchestrator |
2025-06-02 01:07:17.676666 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-06-02 01:07:17.676685 | orchestrator |
2025-06-02 01:07:17.676705 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 01:07:17.676724 | orchestrator | Monday 02 June 2025 01:02:45 +0000 (0:00:00.353) 0:00:00.828 ***********
2025-06-02 01:07:17.676737 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 01:07:17.676749 | orchestrator |
2025-06-02 01:07:17.676760 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-06-02 01:07:17.676771 | orchestrator | Monday 02 June 2025 01:02:45 +0000 (0:00:00.471) 0:00:01.300 ***********
2025-06-02 01:07:17.676783 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-06-02 01:07:17.676793 | orchestrator |
2025-06-02 01:07:17.676804 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-06-02 01:07:17.676815 | orchestrator | Monday 02 June 2025 01:02:49 +0000 (0:00:03.165) 0:00:04.466 ***********
2025-06-02 01:07:17.676826 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-06-02 01:07:17.676837 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-06-02 01:07:17.676848 | orchestrator |
2025-06-02 01:07:17.676859 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-06-02 01:07:17.676870 | orchestrator | Monday 02 June 2025 01:02:55 +0000 (0:00:06.102) 0:00:10.568 ***********
2025-06-02 01:07:17.676880 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 01:07:17.676891 | orchestrator |
2025-06-02 01:07:17.676902 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-06-02 01:07:17.676913 | orchestrator | Monday 02 June 2025 01:02:58 +0000 (0:00:03.150) 0:00:13.719 ***********
2025-06-02 01:07:17.676923 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 01:07:17.676957 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-02 01:07:17.676969 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-02 01:07:17.676980 | orchestrator |
2025-06-02 01:07:17.676991 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-06-02 01:07:17.677042 | orchestrator | Monday 02 June 2025 01:03:06 +0000 (0:00:08.009) 0:00:21.728 ***********
2025-06-02 01:07:17.677062 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 01:07:17.677082 | orchestrator |
2025-06-02 01:07:17.677102 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-06-02 01:07:17.677122 | orchestrator | Monday 02 June 2025 01:03:09 +0000 (0:00:03.449) 0:00:25.178 ***********
2025-06-02 01:07:17.677141 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-02 01:07:17.677160 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-02 01:07:17.677178 | orchestrator |
2025-06-02 01:07:17.677196 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-06-02 01:07:17.677214 | orchestrator | Monday 02 June 2025 01:03:16 +0000 (0:00:06.982) 0:00:32.161 ***********
2025-06-02 01:07:17.677234 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-06-02 01:07:17.677254 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-06-02 01:07:17.677274 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-06-02 01:07:17.677294 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-06-02 01:07:17.677313 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-06-02 01:07:17.677332 | orchestrator |
2025-06-02 01:07:17.677344 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 01:07:17.677355 | orchestrator | Monday 02 June 2025 01:03:31 +0000 (0:00:15.119) 0:00:47.280 ***********
2025-06-02 01:07:17.677367 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 01:07:17.677377 | orchestrator |
2025-06-02 01:07:17.677389 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-06-02 01:07:17.677400 | orchestrator | Monday 02 June 2025 01:03:32 +0000 (0:00:00.557) 0:00:47.837 ***********
2025-06-02 01:07:17.677411 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:07:17.677422 | orchestrator |
2025-06-02 01:07:17.677433 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-06-02 01:07:17.677444 | orchestrator | Monday 02 June 2025 01:03:37 +0000 (0:00:05.009) 0:00:52.847 ***********
2025-06-02 01:07:17.677455 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:07:17.677466 | orchestrator |
2025-06-02 01:07:17.677494 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-06-02 01:07:17.677589 | orchestrator | Monday 02 June 2025 01:03:41 +0000 (0:00:04.318) 0:00:57.165 ***********
2025-06-02 01:07:17.677613 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:07:17.677634 | orchestrator |
2025-06-02 01:07:17.677654 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-06-02 01:07:17.677674 | orchestrator | Monday 02 June 2025 01:03:45 +0000 (0:00:03.199) 0:01:00.365 ***********
2025-06-02 01:07:17.677694 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-06-02 01:07:17.677713 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-06-02 01:07:17.677732 | orchestrator |
2025-06-02 01:07:17.677751 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-06-02 01:07:17.677782 | orchestrator | Monday 02 June 2025 01:03:54 +0000 (0:00:09.760) 0:01:10.126 ***********
2025-06-02 01:07:17.677802 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-06-02 01:07:17.677821 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-06-02 01:07:17.677843 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-06-02 01:07:17.677863 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-06-02 01:07:17.677898 | orchestrator |
2025-06-02 01:07:17.677918 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-06-02 01:07:17.677980 | orchestrator | Monday 02 June 2025 01:04:09 +0000 (0:00:14.742) 0:01:24.868 ***********
2025-06-02 01:07:17.678001 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:07:17.678084 | orchestrator |
2025-06-02 01:07:17.678106 | orchestrator | TASK [octavia : Create loadbalancer management subnet]
************************* 2025-06-02 01:07:17.678125 | orchestrator | Monday 02 June 2025 01:04:15 +0000 (0:00:06.026) 0:01:30.895 *********** 2025-06-02 01:07:17.678145 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.678162 | orchestrator | 2025-06-02 01:07:17.678182 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-06-02 01:07:17.678201 | orchestrator | Monday 02 June 2025 01:04:20 +0000 (0:00:04.915) 0:01:35.810 *********** 2025-06-02 01:07:17.678219 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:07:17.678238 | orchestrator | 2025-06-02 01:07:17.678257 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-06-02 01:07:17.678274 | orchestrator | Monday 02 June 2025 01:04:20 +0000 (0:00:00.211) 0:01:36.022 *********** 2025-06-02 01:07:17.678293 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.678311 | orchestrator | 2025-06-02 01:07:17.678329 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 01:07:17.678347 | orchestrator | Monday 02 June 2025 01:04:25 +0000 (0:00:04.803) 0:01:40.825 *********** 2025-06-02 01:07:17.678365 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:07:17.678384 | orchestrator | 2025-06-02 01:07:17.678403 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-06-02 01:07:17.678422 | orchestrator | Monday 02 June 2025 01:04:26 +0000 (0:00:01.144) 0:01:41.970 *********** 2025-06-02 01:07:17.678441 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.678462 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.678481 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.678500 | orchestrator | 2025-06-02 01:07:17.678518 | orchestrator | TASK [octavia : Update Octavia health manager port 
host_id] ******************** 2025-06-02 01:07:17.678535 | orchestrator | Monday 02 June 2025 01:04:31 +0000 (0:00:05.211) 0:01:47.181 *********** 2025-06-02 01:07:17.678551 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.678570 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.678590 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.678609 | orchestrator | 2025-06-02 01:07:17.678627 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-06-02 01:07:17.678646 | orchestrator | Monday 02 June 2025 01:04:36 +0000 (0:00:04.343) 0:01:51.524 *********** 2025-06-02 01:07:17.678663 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.678681 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.678700 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.678719 | orchestrator | 2025-06-02 01:07:17.678740 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-06-02 01:07:17.678758 | orchestrator | Monday 02 June 2025 01:04:36 +0000 (0:00:00.763) 0:01:52.288 *********** 2025-06-02 01:07:17.678775 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:07:17.678793 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:07:17.678813 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:07:17.678833 | orchestrator | 2025-06-02 01:07:17.678853 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-06-02 01:07:17.678872 | orchestrator | Monday 02 June 2025 01:04:38 +0000 (0:00:01.882) 0:01:54.170 *********** 2025-06-02 01:07:17.678891 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.678911 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.678929 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.678975 | orchestrator | 2025-06-02 01:07:17.678994 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 
2025-06-02 01:07:17.679027 | orchestrator | Monday 02 June 2025 01:04:40 +0000 (0:00:01.357) 0:01:55.527 *********** 2025-06-02 01:07:17.679046 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.679066 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.679086 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.679104 | orchestrator | 2025-06-02 01:07:17.679124 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-06-02 01:07:17.679143 | orchestrator | Monday 02 June 2025 01:04:41 +0000 (0:00:01.136) 0:01:56.664 *********** 2025-06-02 01:07:17.679163 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.679182 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.679203 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.679222 | orchestrator | 2025-06-02 01:07:17.679326 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-06-02 01:07:17.679348 | orchestrator | Monday 02 June 2025 01:04:43 +0000 (0:00:01.844) 0:01:58.509 *********** 2025-06-02 01:07:17.679368 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.679388 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.679408 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.679428 | orchestrator | 2025-06-02 01:07:17.679448 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-06-02 01:07:17.679467 | orchestrator | Monday 02 June 2025 01:04:44 +0000 (0:00:01.742) 0:02:00.251 *********** 2025-06-02 01:07:17.679487 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:07:17.679518 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:07:17.679537 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:07:17.679554 | orchestrator | 2025-06-02 01:07:17.679571 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-06-02 01:07:17.679588 | 
orchestrator | Monday 02 June 2025 01:04:45 +0000 (0:00:00.608) 0:02:00.860 *********** 2025-06-02 01:07:17.679604 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:07:17.679622 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:07:17.679641 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:07:17.679657 | orchestrator | 2025-06-02 01:07:17.679675 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 01:07:17.679692 | orchestrator | Monday 02 June 2025 01:04:48 +0000 (0:00:02.716) 0:02:03.576 *********** 2025-06-02 01:07:17.679711 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:07:17.679728 | orchestrator | 2025-06-02 01:07:17.679745 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-06-02 01:07:17.679764 | orchestrator | Monday 02 June 2025 01:04:48 +0000 (0:00:00.711) 0:02:04.287 *********** 2025-06-02 01:07:17.679784 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:07:17.679803 | orchestrator | 2025-06-02 01:07:17.679823 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-02 01:07:17.679843 | orchestrator | Monday 02 June 2025 01:04:52 +0000 (0:00:03.342) 0:02:07.629 *********** 2025-06-02 01:07:17.679863 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:07:17.679881 | orchestrator | 2025-06-02 01:07:17.679901 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-06-02 01:07:17.679921 | orchestrator | Monday 02 June 2025 01:04:55 +0000 (0:00:02.984) 0:02:10.614 *********** 2025-06-02 01:07:17.679971 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-02 01:07:17.679990 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-02 01:07:17.680009 | orchestrator | 2025-06-02 01:07:17.680027 | orchestrator | TASK 
[octavia : Get loadbalancer management network] *************************** 2025-06-02 01:07:17.680046 | orchestrator | Monday 02 June 2025 01:05:01 +0000 (0:00:06.514) 0:02:17.128 *********** 2025-06-02 01:07:17.680064 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:07:17.680082 | orchestrator | 2025-06-02 01:07:17.680099 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-06-02 01:07:17.680117 | orchestrator | Monday 02 June 2025 01:05:05 +0000 (0:00:03.266) 0:02:20.394 *********** 2025-06-02 01:07:17.680151 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:07:17.680169 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:07:17.680187 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:07:17.680205 | orchestrator | 2025-06-02 01:07:17.680223 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-06-02 01:07:17.680240 | orchestrator | Monday 02 June 2025 01:05:05 +0000 (0:00:00.312) 0:02:20.706 *********** 2025-06-02 01:07:17.680262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2025-06-02 01:07:17.680351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.680383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 
01:07:17.680403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.680422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.680452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.680471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.680490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.680555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.680575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.680592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.680622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.680642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.680661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.680720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.680739 | orchestrator | 2025-06-02 01:07:17.680755 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-06-02 01:07:17.680772 | orchestrator | Monday 02 June 2025 01:05:07 +0000 (0:00:02.516) 0:02:23.223 *********** 2025-06-02 01:07:17.680789 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 01:07:17.680805 | orchestrator | 2025-06-02 01:07:17.680821 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-06-02 01:07:17.680837 | orchestrator | Monday 02 June 2025 01:05:08 +0000 (0:00:00.301) 0:02:23.524 *********** 2025-06-02 01:07:17.680853 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:07:17.680875 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:07:17.680891 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:07:17.680906 | orchestrator | 2025-06-02 01:07:17.680922 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-06-02 01:07:17.681003 | orchestrator | Monday 02 June 2025 01:05:08 +0000 (0:00:00.277) 0:02:23.802 *********** 2025-06-02 01:07:17.681021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 01:07:17.681051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 01:07:17.681063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:07:17.681094 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:07:17.681148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 01:07:17.681172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 01:07:17.681183 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:07:17.681213 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 01:07:17.681251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 01:07:17.681267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 01:07:17.681284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:07:17.681315 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:07:17.681324 | orchestrator | 2025-06-02 01:07:17.681334 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 01:07:17.681344 | orchestrator | Monday 02 June 2025 01:05:09 +0000 (0:00:00.631) 0:02:24.433 *********** 2025-06-02 01:07:17.681354 | orchestrator | included: 
/ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 01:07:17.681364 | orchestrator | 2025-06-02 01:07:17.681374 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-06-02 01:07:17.681384 | orchestrator | Monday 02 June 2025 01:05:09 +0000 (0:00:00.506) 0:02:24.939 *********** 2025-06-02 01:07:17.681394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.681437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.681455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.681543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.681558 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.681569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.681579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.681601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.681619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.681630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.681640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.681650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.681661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.681681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.681703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.681713 | orchestrator | 2025-06-02 01:07:17.681723 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-06-02 01:07:17.681733 | orchestrator | Monday 02 June 2025 01:05:14 +0000 (0:00:05.013) 0:02:29.953 *********** 2025-06-02 01:07:17.681744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 01:07:17.681754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 01:07:17.681765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:07:17.681808 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:07:17.681824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 01:07:17.681835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 01:07:17.681845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:07:17.681876 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:07:17.681903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 01:07:17.681914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 01:07:17.681925 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.681989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.682001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:07:17.682041 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 01:07:17.682055 | orchestrator | 2025-06-02 01:07:17.682065 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-06-02 01:07:17.682075 | orchestrator | Monday 02 June 2025 01:05:15 +0000 (0:00:00.650) 0:02:30.603 *********** 2025-06-02 01:07:17.682085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 01:07:17.682110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 01:07:17.682126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.682137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.682147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:07:17.682157 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:07:17.682167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 01:07:17.682187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 01:07:17.682205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.682221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.682232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:07:17.682242 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:07:17.682252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 01:07:17.682262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 01:07:17.682278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.682296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 01:07:17.682311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 01:07:17.682322 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:07:17.682331 | orchestrator | 2025-06-02 01:07:17.682339 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-06-02 01:07:17.682347 | orchestrator | Monday 02 June 2025 01:05:16 +0000 (0:00:00.805) 0:02:31.408 *********** 2025-06-02 01:07:17.682355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.682364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.682377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.682391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.682404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.682412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.682421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682514 | 
orchestrator | 2025-06-02 01:07:17.682522 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-06-02 01:07:17.682530 | orchestrator | Monday 02 June 2025 01:05:21 +0000 (0:00:05.031) 0:02:36.440 *********** 2025-06-02 01:07:17.682538 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-02 01:07:17.682547 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-02 01:07:17.682555 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-02 01:07:17.682563 | orchestrator | 2025-06-02 01:07:17.682571 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-06-02 01:07:17.682579 | orchestrator | Monday 02 June 2025 01:05:22 +0000 (0:00:01.506) 0:02:37.946 *********** 2025-06-02 01:07:17.682591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.682604 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.682613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.682626 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.682634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.682643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.682656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.682750 | orchestrator | 2025-06-02 01:07:17.682759 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-06-02 01:07:17.682773 | orchestrator | Monday 02 June 2025 01:05:38 +0000 (0:00:15.522) 0:02:53.469 *********** 2025-06-02 01:07:17.682787 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.682801 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.682822 | orchestrator | changed: [testbed-node-2] 
2025-06-02 01:07:17.682835 | orchestrator | 2025-06-02 01:07:17.682848 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-06-02 01:07:17.682861 | orchestrator | Monday 02 June 2025 01:05:39 +0000 (0:00:01.431) 0:02:54.900 *********** 2025-06-02 01:07:17.682875 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 01:07:17.682889 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 01:07:17.682900 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 01:07:17.682912 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 01:07:17.682924 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 01:07:17.682956 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 01:07:17.682969 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 01:07:17.682980 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 01:07:17.682991 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 01:07:17.683004 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 01:07:17.683016 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 01:07:17.683028 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 01:07:17.683039 | orchestrator | 2025-06-02 01:07:17.683051 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-06-02 01:07:17.683063 | orchestrator | Monday 02 June 2025 01:05:44 +0000 (0:00:05.229) 0:03:00.129 *********** 2025-06-02 01:07:17.683074 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 01:07:17.683086 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 
01:07:17.683099 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 01:07:17.683111 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 01:07:17.683123 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 01:07:17.683135 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 01:07:17.683147 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 01:07:17.683160 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 01:07:17.683173 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 01:07:17.683185 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 01:07:17.683197 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 01:07:17.683209 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 01:07:17.683221 | orchestrator | 2025-06-02 01:07:17.683233 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-06-02 01:07:17.683245 | orchestrator | Monday 02 June 2025 01:05:49 +0000 (0:00:04.936) 0:03:05.066 *********** 2025-06-02 01:07:17.683258 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 01:07:17.683269 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 01:07:17.683282 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 01:07:17.683294 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 01:07:17.683306 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 01:07:17.683318 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 01:07:17.683331 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 
01:07:17.683353 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 01:07:17.683366 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 01:07:17.683378 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 01:07:17.683390 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 01:07:17.683417 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 01:07:17.683430 | orchestrator | 2025-06-02 01:07:17.683443 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-06-02 01:07:17.683457 | orchestrator | Monday 02 June 2025 01:05:54 +0000 (0:00:05.097) 0:03:10.163 *********** 2025-06-02 01:07:17.683479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.683496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.683510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 01:07:17.683525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.683545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.683566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 01:07:17.683575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.683584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.683592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.683601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.683609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.683631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 01:07:17.683644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.683652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.683661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 01:07:17.683669 | orchestrator | 2025-06-02 01:07:17.683677 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 01:07:17.683685 | orchestrator | Monday 02 June 2025 01:05:58 +0000 (0:00:03.646) 0:03:13.810 *********** 2025-06-02 01:07:17.683693 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:07:17.683701 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:07:17.683709 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:07:17.683717 | orchestrator | 2025-06-02 01:07:17.683725 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 
2025-06-02 01:07:17.683733 | orchestrator | Monday 02 June 2025 01:05:58 +0000 (0:00:00.275) 0:03:14.085 *********** 2025-06-02 01:07:17.683741 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.683749 | orchestrator | 2025-06-02 01:07:17.683757 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-06-02 01:07:17.683765 | orchestrator | Monday 02 June 2025 01:06:00 +0000 (0:00:01.970) 0:03:16.056 *********** 2025-06-02 01:07:17.683773 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.683781 | orchestrator | 2025-06-02 01:07:17.683789 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-06-02 01:07:17.683797 | orchestrator | Monday 02 June 2025 01:06:03 +0000 (0:00:02.283) 0:03:18.339 *********** 2025-06-02 01:07:17.683804 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.683812 | orchestrator | 2025-06-02 01:07:17.683822 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-06-02 01:07:17.683842 | orchestrator | Monday 02 June 2025 01:06:05 +0000 (0:00:02.159) 0:03:20.499 *********** 2025-06-02 01:07:17.683856 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.683869 | orchestrator | 2025-06-02 01:07:17.683882 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-06-02 01:07:17.683896 | orchestrator | Monday 02 June 2025 01:06:07 +0000 (0:00:02.154) 0:03:22.653 *********** 2025-06-02 01:07:17.683910 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.683923 | orchestrator | 2025-06-02 01:07:17.683955 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 01:07:17.683970 | orchestrator | Monday 02 June 2025 01:06:27 +0000 (0:00:20.137) 0:03:42.791 *********** 2025-06-02 01:07:17.683984 | orchestrator | 2025-06-02 01:07:17.683998 | orchestrator | TASK 
[octavia : Flush handlers] ************************************************ 2025-06-02 01:07:17.684013 | orchestrator | Monday 02 June 2025 01:06:27 +0000 (0:00:00.067) 0:03:42.859 *********** 2025-06-02 01:07:17.684027 | orchestrator | 2025-06-02 01:07:17.684041 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 01:07:17.684055 | orchestrator | Monday 02 June 2025 01:06:27 +0000 (0:00:00.062) 0:03:42.922 *********** 2025-06-02 01:07:17.684069 | orchestrator | 2025-06-02 01:07:17.684084 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-06-02 01:07:17.684108 | orchestrator | Monday 02 June 2025 01:06:27 +0000 (0:00:00.068) 0:03:42.990 *********** 2025-06-02 01:07:17.684124 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.684138 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.684152 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.684165 | orchestrator | 2025-06-02 01:07:17.684178 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-06-02 01:07:17.684192 | orchestrator | Monday 02 June 2025 01:06:43 +0000 (0:00:15.814) 0:03:58.804 *********** 2025-06-02 01:07:17.684205 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.684218 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.684232 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.684252 | orchestrator | 2025-06-02 01:07:17.684265 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-06-02 01:07:17.684278 | orchestrator | Monday 02 June 2025 01:06:54 +0000 (0:00:11.436) 0:04:10.241 *********** 2025-06-02 01:07:17.684290 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.684303 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.684316 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.684329 | 
orchestrator | 2025-06-02 01:07:17.684342 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-06-02 01:07:17.684356 | orchestrator | Monday 02 June 2025 01:07:05 +0000 (0:00:10.285) 0:04:20.527 *********** 2025-06-02 01:07:17.684370 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.684384 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.684398 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.684411 | orchestrator | 2025-06-02 01:07:17.684424 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-06-02 01:07:17.684439 | orchestrator | Monday 02 June 2025 01:07:10 +0000 (0:00:05.202) 0:04:25.730 *********** 2025-06-02 01:07:17.684452 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:07:17.684465 | orchestrator | changed: [testbed-node-2] 2025-06-02 01:07:17.684479 | orchestrator | changed: [testbed-node-1] 2025-06-02 01:07:17.684493 | orchestrator | 2025-06-02 01:07:17.684506 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 01:07:17.684520 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 01:07:17.684535 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 01:07:17.684559 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 01:07:17.684573 | orchestrator | 2025-06-02 01:07:17.684585 | orchestrator | 2025-06-02 01:07:17.684597 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 01:07:17.684610 | orchestrator | Monday 02 June 2025 01:07:15 +0000 (0:00:05.456) 0:04:31.186 *********** 2025-06-02 01:07:17.684624 | orchestrator | =============================================================================== 2025-06-02 
01:07:17.684638 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.14s 2025-06-02 01:07:17.684651 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.81s 2025-06-02 01:07:17.684661 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.52s 2025-06-02 01:07:17.684669 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.12s 2025-06-02 01:07:17.684677 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.74s 2025-06-02 01:07:17.684685 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.44s 2025-06-02 01:07:17.684693 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.29s 2025-06-02 01:07:17.684701 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.76s 2025-06-02 01:07:17.684709 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.01s 2025-06-02 01:07:17.684717 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.98s 2025-06-02 01:07:17.684725 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.51s 2025-06-02 01:07:17.684732 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.10s 2025-06-02 01:07:17.684740 | orchestrator | octavia : Create loadbalancer management network ------------------------ 6.03s 2025-06-02 01:07:17.684748 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.46s 2025-06-02 01:07:17.684756 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.23s 2025-06-02 01:07:17.684764 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.21s 2025-06-02 01:07:17.684772 
| orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.20s 2025-06-02 01:07:17.684780 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.10s 2025-06-02 01:07:17.684788 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.03s 2025-06-02 01:07:17.684796 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.01s 2025-06-02 01:07:17.684803 | orchestrator | 2025-06-02 01:07:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:20.713914 | orchestrator | 2025-06-02 01:07:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:23.762451 | orchestrator | 2025-06-02 01:07:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:26.801669 | orchestrator | 2025-06-02 01:07:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:29.845867 | orchestrator | 2025-06-02 01:07:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:32.891324 | orchestrator | 2025-06-02 01:07:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:35.935068 | orchestrator | 2025-06-02 01:07:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:38.977657 | orchestrator | 2025-06-02 01:07:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:42.023255 | orchestrator | 2025-06-02 01:07:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:45.073783 | orchestrator | 2025-06-02 01:07:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:48.118341 | orchestrator | 2025-06-02 01:07:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:51.166393 | orchestrator | 2025-06-02 01:07:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:54.226987 | orchestrator | 2025-06-02 01:07:54 | 
INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:07:57.263788 | orchestrator | 2025-06-02 01:07:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:08:00.306183 | orchestrator | 2025-06-02 01:08:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:08:03.353156 | orchestrator | 2025-06-02 01:08:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:08:06.413285 | orchestrator | 2025-06-02 01:08:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:08:09.468402 | orchestrator | 2025-06-02 01:08:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:08:12.511332 | orchestrator | 2025-06-02 01:08:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:08:15.551815 | orchestrator | 2025-06-02 01:08:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 01:08:18.601711 | orchestrator | 2025-06-02 01:08:19.032939 | orchestrator | 2025-06-02 01:08:19.037481 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Jun 2 01:08:19 UTC 2025 2025-06-02 01:08:19.037528 | orchestrator | 2025-06-02 01:08:19.348250 | orchestrator | ok: Runtime: 0:32:22.533922 2025-06-02 01:08:19.597206 | 2025-06-02 01:08:19.597353 | TASK [Bootstrap services] 2025-06-02 01:08:20.307455 | orchestrator | 2025-06-02 01:08:20.307640 | orchestrator | # BOOTSTRAP 2025-06-02 01:08:20.307665 | orchestrator | 2025-06-02 01:08:20.307682 | orchestrator | + set -e 2025-06-02 01:08:20.307699 | orchestrator | + echo 2025-06-02 01:08:20.307716 | orchestrator | + echo '# BOOTSTRAP' 2025-06-02 01:08:20.307736 | orchestrator | + echo 2025-06-02 01:08:20.307783 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-02 01:08:20.316899 | orchestrator | + set -e 2025-06-02 01:08:20.316961 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-02 01:08:22.181643 | orchestrator | 2025-06-02 01:08:22 | INFO 
 | It takes a moment until task 9e182f30-cdad-4a51-a588-43aba25cff17 (flavor-manager) has been started and output is visible here. 2025-06-02 01:08:25.598436 | orchestrator | 2025-06-02 01:08:25 | INFO  | Flavor SCS-1V-4 created 2025-06-02 01:08:25.800722 | orchestrator | 2025-06-02 01:08:25 | INFO  | Flavor SCS-2V-8 created 2025-06-02 01:08:26.006995 | orchestrator | 2025-06-02 01:08:25 | INFO  | Flavor SCS-4V-16 created 2025-06-02 01:08:26.129180 | orchestrator | 2025-06-02 01:08:26 | INFO  | Flavor SCS-8V-32 created 2025-06-02 01:08:26.246296 | orchestrator | 2025-06-02 01:08:26 | INFO  | Flavor SCS-1V-2 created 2025-06-02 01:08:26.386331 | orchestrator | 2025-06-02 01:08:26 | INFO  | Flavor SCS-2V-4 created 2025-06-02 01:08:26.505492 | orchestrator | 2025-06-02 01:08:26 | INFO  | Flavor SCS-4V-8 created 2025-06-02 01:08:26.636640 | orchestrator | 2025-06-02 01:08:26 | INFO  | Flavor SCS-8V-16 created 2025-06-02 01:08:26.778899 | orchestrator | 2025-06-02 01:08:26 | INFO  | Flavor SCS-16V-32 created 2025-06-02 01:08:26.892455 | orchestrator | 2025-06-02 01:08:26 | INFO  | Flavor SCS-1V-8 created 2025-06-02 01:08:27.008383 | orchestrator | 2025-06-02 01:08:27 | INFO  | Flavor SCS-2V-16 created 2025-06-02 01:08:27.134116 | orchestrator | 2025-06-02 01:08:27 | INFO  | Flavor SCS-4V-32 created 2025-06-02 01:08:27.265017 | orchestrator | 2025-06-02 01:08:27 | INFO  | Flavor SCS-1L-1 created 2025-06-02 01:08:27.406792 | orchestrator | 2025-06-02 01:08:27 | INFO  | Flavor SCS-2V-4-20s created 2025-06-02 01:08:27.545769 | orchestrator | 2025-06-02 01:08:27 | INFO  | Flavor SCS-4V-16-100s created 2025-06-02 01:08:27.683759 | orchestrator | 2025-06-02 01:08:27 | INFO  | Flavor SCS-1V-4-10 created 2025-06-02 01:08:27.810775 | orchestrator | 2025-06-02 01:08:27 | INFO  | Flavor SCS-2V-8-20 created 2025-06-02 01:08:27.931782 | orchestrator | 2025-06-02 01:08:27 | INFO  | Flavor SCS-4V-16-50 created 2025-06-02 01:08:28.044681 | orchestrator | 2025-06-02 01:08:28 | INFO  | 
Flavor SCS-8V-32-100 created 2025-06-02 01:08:28.161473 | orchestrator | 2025-06-02 01:08:28 | INFO  | Flavor SCS-1V-2-5 created 2025-06-02 01:08:28.269696 | orchestrator | 2025-06-02 01:08:28 | INFO  | Flavor SCS-2V-4-10 created 2025-06-02 01:08:28.409846 | orchestrator | 2025-06-02 01:08:28 | INFO  | Flavor SCS-4V-8-20 created 2025-06-02 01:08:28.515719 | orchestrator | 2025-06-02 01:08:28 | INFO  | Flavor SCS-8V-16-50 created 2025-06-02 01:08:28.623132 | orchestrator | 2025-06-02 01:08:28 | INFO  | Flavor SCS-16V-32-100 created 2025-06-02 01:08:28.740629 | orchestrator | 2025-06-02 01:08:28 | INFO  | Flavor SCS-1V-8-20 created 2025-06-02 01:08:28.861794 | orchestrator | 2025-06-02 01:08:28 | INFO  | Flavor SCS-2V-16-50 created 2025-06-02 01:08:28.976410 | orchestrator | 2025-06-02 01:08:28 | INFO  | Flavor SCS-4V-32-100 created 2025-06-02 01:08:29.106311 | orchestrator | 2025-06-02 01:08:29 | INFO  | Flavor SCS-1L-1-5 created 2025-06-02 01:08:31.219662 | orchestrator | 2025-06-02 01:08:31 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-02 01:08:31.224690 | orchestrator | Registering Redlock._acquired_script 2025-06-02 01:08:31.224754 | orchestrator | Registering Redlock._extend_script 2025-06-02 01:08:31.224798 | orchestrator | Registering Redlock._release_script 2025-06-02 01:08:31.281062 | orchestrator | 2025-06-02 01:08:31 | INFO  | Task e75cf844-d0f1-43a0-8ef0-d89dbe586d78 (bootstrap-basic) was prepared for execution. 2025-06-02 01:08:31.281127 | orchestrator | 2025-06-02 01:08:31 | INFO  | It takes a moment until task e75cf844-d0f1-43a0-8ef0-d89dbe586d78 (bootstrap-basic) has been started and output is visible here. 
2025-06-02 01:08:35.128471 | orchestrator | 2025-06-02 01:08:35.128984 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-02 01:08:35.130673 | orchestrator | 2025-06-02 01:08:35.131647 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 01:08:35.133257 | orchestrator | Monday 02 June 2025 01:08:35 +0000 (0:00:00.071) 0:00:00.071 *********** 2025-06-02 01:08:36.893059 | orchestrator | ok: [localhost] 2025-06-02 01:08:36.893639 | orchestrator | 2025-06-02 01:08:36.894570 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-02 01:08:36.895323 | orchestrator | Monday 02 June 2025 01:08:36 +0000 (0:00:01.769) 0:00:01.840 *********** 2025-06-02 01:08:45.337578 | orchestrator | ok: [localhost] 2025-06-02 01:08:45.338182 | orchestrator | 2025-06-02 01:08:45.339199 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-02 01:08:45.340473 | orchestrator | Monday 02 June 2025 01:08:45 +0000 (0:00:08.443) 0:00:10.284 *********** 2025-06-02 01:08:52.063889 | orchestrator | changed: [localhost] 2025-06-02 01:08:52.065232 | orchestrator | 2025-06-02 01:08:52.066183 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-02 01:08:52.066450 | orchestrator | Monday 02 June 2025 01:08:52 +0000 (0:00:06.726) 0:00:17.010 *********** 2025-06-02 01:08:58.601979 | orchestrator | ok: [localhost] 2025-06-02 01:08:58.602740 | orchestrator | 2025-06-02 01:08:58.604115 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-02 01:08:58.605974 | orchestrator | Monday 02 June 2025 01:08:58 +0000 (0:00:06.536) 0:00:23.547 *********** 2025-06-02 01:09:05.884365 | orchestrator | changed: [localhost] 2025-06-02 01:09:05.884748 | orchestrator | 2025-06-02 01:09:05.885764 | orchestrator | 
TASK [Create public network] *************************************************** 2025-06-02 01:09:05.887258 | orchestrator | Monday 02 June 2025 01:09:05 +0000 (0:00:07.282) 0:00:30.829 *********** 2025-06-02 01:09:10.695915 | orchestrator | changed: [localhost] 2025-06-02 01:09:10.697392 | orchestrator | 2025-06-02 01:09:10.699164 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-02 01:09:10.699722 | orchestrator | Monday 02 June 2025 01:09:10 +0000 (0:00:04.811) 0:00:35.641 *********** 2025-06-02 01:09:16.516010 | orchestrator | changed: [localhost] 2025-06-02 01:09:16.516780 | orchestrator | 2025-06-02 01:09:16.518923 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-02 01:09:16.519963 | orchestrator | Monday 02 June 2025 01:09:16 +0000 (0:00:05.820) 0:00:41.461 *********** 2025-06-02 01:09:20.722377 | orchestrator | changed: [localhost] 2025-06-02 01:09:20.723249 | orchestrator | 2025-06-02 01:09:20.723678 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-02 01:09:20.724902 | orchestrator | Monday 02 June 2025 01:09:20 +0000 (0:00:04.205) 0:00:45.667 *********** 2025-06-02 01:09:24.437962 | orchestrator | changed: [localhost] 2025-06-02 01:09:24.438081 | orchestrator | 2025-06-02 01:09:24.439349 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-02 01:09:24.440745 | orchestrator | Monday 02 June 2025 01:09:24 +0000 (0:00:03.714) 0:00:49.382 *********** 2025-06-02 01:09:27.998594 | orchestrator | ok: [localhost] 2025-06-02 01:09:27.998769 | orchestrator | 2025-06-02 01:09:28.000974 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 01:09:28.001004 | orchestrator | 2025-06-02 01:09:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 01:09:28.001431 | orchestrator | 2025-06-02 01:09:27 | INFO  | Please wait and do not abort execution. 2025-06-02 01:09:28.003055 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 01:09:28.004261 | orchestrator | 2025-06-02 01:09:28.005466 | orchestrator | 2025-06-02 01:09:28.005957 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 01:09:28.006955 | orchestrator | Monday 02 June 2025 01:09:27 +0000 (0:00:03.560) 0:00:52.943 *********** 2025-06-02 01:09:28.007854 | orchestrator | =============================================================================== 2025-06-02 01:09:28.008730 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.44s 2025-06-02 01:09:28.009700 | orchestrator | Create volume type local ------------------------------------------------ 7.28s 2025-06-02 01:09:28.010216 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.73s 2025-06-02 01:09:28.011115 | orchestrator | Get volume type local --------------------------------------------------- 6.54s 2025-06-02 01:09:28.011631 | orchestrator | Set public network to default ------------------------------------------- 5.82s 2025-06-02 01:09:28.013542 | orchestrator | Create public network --------------------------------------------------- 4.81s 2025-06-02 01:09:28.014142 | orchestrator | Create public subnet ---------------------------------------------------- 4.21s 2025-06-02 01:09:28.014892 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.71s 2025-06-02 01:09:28.015573 | orchestrator | Create manager role ----------------------------------------------------- 3.56s 2025-06-02 01:09:28.016729 | orchestrator | Gathering Facts --------------------------------------------------------- 1.77s 2025-06-02 01:09:30.206964 | orchestrator | 2025-06-02 01:09:30 
| INFO  | It takes a moment until task 5e0ee18c-2296-41a4-8f96-5d042d145df9 (image-manager) has been started and output is visible here. 2025-06-02 01:09:33.496272 | orchestrator | 2025-06-02 01:09:33 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-02 01:09:33.713607 | orchestrator | 2025-06-02 01:09:33 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-02 01:09:33.713699 | orchestrator | 2025-06-02 01:09:33 | INFO  | Importing image Cirros 0.6.2 2025-06-02 01:09:33.714183 | orchestrator | 2025-06-02 01:09:33 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 01:09:35.425377 | orchestrator | 2025-06-02 01:09:35 | INFO  | Waiting for image to leave queued state... 2025-06-02 01:09:37.465931 | orchestrator | 2025-06-02 01:09:37 | INFO  | Waiting for import to complete... 2025-06-02 01:09:47.607889 | orchestrator | 2025-06-02 01:09:47 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-02 01:09:47.987576 | orchestrator | 2025-06-02 01:09:47 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-02 01:09:47.989165 | orchestrator | 2025-06-02 01:09:47 | INFO  | Setting internal_version = 0.6.2 2025-06-02 01:09:47.989358 | orchestrator | 2025-06-02 01:09:47 | INFO  | Setting image_original_user = cirros 2025-06-02 01:09:47.990260 | orchestrator | 2025-06-02 01:09:47 | INFO  | Adding tag os:cirros 2025-06-02 01:09:48.274146 | orchestrator | 2025-06-02 01:09:48 | INFO  | Setting property architecture: x86_64 2025-06-02 01:09:48.511540 | orchestrator | 2025-06-02 01:09:48 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 01:09:48.821211 | orchestrator | 2025-06-02 01:09:48 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 01:09:49.000711 | orchestrator | 2025-06-02 01:09:48 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 01:09:49.198108 | orchestrator | 
2025-06-02 01:09:49 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 01:09:49.411689 | orchestrator | 2025-06-02 01:09:49 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 01:09:49.628923 | orchestrator | 2025-06-02 01:09:49 | INFO  | Setting property os_distro: cirros 2025-06-02 01:09:49.819978 | orchestrator | 2025-06-02 01:09:49 | INFO  | Setting property replace_frequency: never 2025-06-02 01:09:50.028482 | orchestrator | 2025-06-02 01:09:50 | INFO  | Setting property uuid_validity: none 2025-06-02 01:09:50.209440 | orchestrator | 2025-06-02 01:09:50 | INFO  | Setting property provided_until: none 2025-06-02 01:09:50.389618 | orchestrator | 2025-06-02 01:09:50 | INFO  | Setting property image_description: Cirros 2025-06-02 01:09:50.590388 | orchestrator | 2025-06-02 01:09:50 | INFO  | Setting property image_name: Cirros 2025-06-02 01:09:50.803908 | orchestrator | 2025-06-02 01:09:50 | INFO  | Setting property internal_version: 0.6.2 2025-06-02 01:09:50.986769 | orchestrator | 2025-06-02 01:09:50 | INFO  | Setting property image_original_user: cirros 2025-06-02 01:09:51.191892 | orchestrator | 2025-06-02 01:09:51 | INFO  | Setting property os_version: 0.6.2 2025-06-02 01:09:51.387162 | orchestrator | 2025-06-02 01:09:51 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 01:09:51.592394 | orchestrator | 2025-06-02 01:09:51 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-02 01:09:51.841122 | orchestrator | 2025-06-02 01:09:51 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-02 01:09:51.841601 | orchestrator | 2025-06-02 01:09:51 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-02 01:09:51.842954 | orchestrator | 2025-06-02 01:09:51 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-02 01:09:52.047182 | orchestrator | 2025-06-02 01:09:52 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-02 01:09:52.254015 | 
orchestrator | 2025-06-02 01:09:52 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-06-02 01:09:52.254649 | orchestrator | 2025-06-02 01:09:52 | INFO  | Importing image Cirros 0.6.3
2025-06-02 01:09:52.255633 | orchestrator | 2025-06-02 01:09:52 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-02 01:09:53.293846 | orchestrator | 2025-06-02 01:09:53 | INFO  | Waiting for image to leave queued state...
2025-06-02 01:09:55.334354 | orchestrator | 2025-06-02 01:09:55 | INFO  | Waiting for import to complete...
2025-06-02 01:10:05.453857 | orchestrator | 2025-06-02 01:10:05 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-06-02 01:10:05.715784 | orchestrator | 2025-06-02 01:10:05 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-06-02 01:10:05.716719 | orchestrator | 2025-06-02 01:10:05 | INFO  | Setting internal_version = 0.6.3
2025-06-02 01:10:05.717278 | orchestrator | 2025-06-02 01:10:05 | INFO  | Setting image_original_user = cirros
2025-06-02 01:10:05.718308 | orchestrator | 2025-06-02 01:10:05 | INFO  | Adding tag os:cirros
2025-06-02 01:10:05.938464 | orchestrator | 2025-06-02 01:10:05 | INFO  | Setting property architecture: x86_64
2025-06-02 01:10:06.164619 | orchestrator | 2025-06-02 01:10:06 | INFO  | Setting property hw_disk_bus: scsi
2025-06-02 01:10:06.356467 | orchestrator | 2025-06-02 01:10:06 | INFO  | Setting property hw_rng_model: virtio
2025-06-02 01:10:06.546226 | orchestrator | 2025-06-02 01:10:06 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-02 01:10:06.728006 | orchestrator | 2025-06-02 01:10:06 | INFO  | Setting property hw_watchdog_action: reset
2025-06-02 01:10:06.905660 | orchestrator | 2025-06-02 01:10:06 | INFO  | Setting property hypervisor_type: qemu
2025-06-02 01:10:07.090116 | orchestrator | 2025-06-02 01:10:07 | INFO  | Setting property os_distro: cirros
2025-06-02 01:10:07.257920 | orchestrator | 2025-06-02 01:10:07 | INFO  | Setting property replace_frequency: never
2025-06-02 01:10:07.438450 | orchestrator | 2025-06-02 01:10:07 | INFO  | Setting property uuid_validity: none
2025-06-02 01:10:07.665972 | orchestrator | 2025-06-02 01:10:07 | INFO  | Setting property provided_until: none
2025-06-02 01:10:07.850512 | orchestrator | 2025-06-02 01:10:07 | INFO  | Setting property image_description: Cirros
2025-06-02 01:10:08.064242 | orchestrator | 2025-06-02 01:10:08 | INFO  | Setting property image_name: Cirros
2025-06-02 01:10:08.267583 | orchestrator | 2025-06-02 01:10:08 | INFO  | Setting property internal_version: 0.6.3
2025-06-02 01:10:08.436194 | orchestrator | 2025-06-02 01:10:08 | INFO  | Setting property image_original_user: cirros
2025-06-02 01:10:08.657591 | orchestrator | 2025-06-02 01:10:08 | INFO  | Setting property os_version: 0.6.3
2025-06-02 01:10:08.849058 | orchestrator | 2025-06-02 01:10:08 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-02 01:10:09.037750 | orchestrator | 2025-06-02 01:10:09 | INFO  | Setting property image_build_date: 2024-09-26
2025-06-02 01:10:09.265723 | orchestrator | 2025-06-02 01:10:09 | INFO  | Checking status of 'Cirros 0.6.3'
2025-06-02 01:10:09.268400 | orchestrator | 2025-06-02 01:10:09 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-06-02 01:10:09.269624 | orchestrator | 2025-06-02 01:10:09 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-06-02 01:10:10.209244 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-06-02 01:10:12.056669 | orchestrator | 2025-06-02 01:10:12 | INFO  | date: 2025-06-01
2025-06-02 01:10:12.056775 | orchestrator | 2025-06-02 01:10:12 | INFO  | image: octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-02 01:10:12.058972 | orchestrator | 2025-06-02 01:10:12 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-02 01:10:12.059024 | orchestrator | 2025-06-02 01:10:12 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2.CHECKSUM
2025-06-02 01:10:12.187466 | orchestrator | 2025-06-02 01:10:12 | INFO  | checksum: 700471d784d62fa237f40333fe5c8c65dd56f28e7d4645bd524c044147a32271
2025-06-02 01:10:12.255547 | orchestrator | 2025-06-02 01:10:12 | INFO  | It takes a moment until task 6f434afb-a546-4d72-a0b4-51fbc7d3b43d (image-manager) has been started and output is visible here.
2025-06-02 01:10:12.494218 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-06-02 01:10:12.494467 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-06-02 01:10:14.079291 | orchestrator | 2025-06-02 01:10:14 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-01'
2025-06-02 01:10:14.097281 | orchestrator | 2025-06-02 01:10:14 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2: 200
2025-06-02 01:10:14.098516 | orchestrator | 2025-06-02 01:10:14 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-01
2025-06-02 01:10:14.099620 | orchestrator | 2025-06-02 01:10:14 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-02 01:10:14.474425 | orchestrator | 2025-06-02 01:10:14 | INFO  | Waiting for image to leave queued state...
2025-06-02 01:10:16.521277 | orchestrator | 2025-06-02 01:10:16 | INFO  | Waiting for import to complete...
2025-06-02 01:10:26.600979 | orchestrator | 2025-06-02 01:10:26 | INFO  | Waiting for import to complete...
2025-06-02 01:10:36.690354 | orchestrator | 2025-06-02 01:10:36 | INFO  | Waiting for import to complete...
2025-06-02 01:10:46.774195 | orchestrator | 2025-06-02 01:10:46 | INFO  | Waiting for import to complete...
2025-06-02 01:10:56.867702 | orchestrator | 2025-06-02 01:10:56 | INFO  | Waiting for import to complete...
2025-06-02 01:11:07.184441 | orchestrator | 2025-06-02 01:11:07 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-01' successfully completed, reloading images
2025-06-02 01:11:07.530981 | orchestrator | 2025-06-02 01:11:07 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-01'
2025-06-02 01:11:07.531294 | orchestrator | 2025-06-02 01:11:07 | INFO  | Setting internal_version = 2025-06-01
2025-06-02 01:11:07.532323 | orchestrator | 2025-06-02 01:11:07 | INFO  | Setting image_original_user = ubuntu
2025-06-02 01:11:07.532956 | orchestrator | 2025-06-02 01:11:07 | INFO  | Adding tag amphora
2025-06-02 01:11:07.740197 | orchestrator | 2025-06-02 01:11:07 | INFO  | Adding tag os:ubuntu
2025-06-02 01:11:07.973882 | orchestrator | 2025-06-02 01:11:07 | INFO  | Setting property architecture: x86_64
2025-06-02 01:11:08.174538 | orchestrator | 2025-06-02 01:11:08 | INFO  | Setting property hw_disk_bus: scsi
2025-06-02 01:11:08.365519 | orchestrator | 2025-06-02 01:11:08 | INFO  | Setting property hw_rng_model: virtio
2025-06-02 01:11:08.574636 | orchestrator | 2025-06-02 01:11:08 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-02 01:11:08.739050 | orchestrator | 2025-06-02 01:11:08 | INFO  | Setting property hw_watchdog_action: reset
2025-06-02 01:11:08.944146 | orchestrator | 2025-06-02 01:11:08 | INFO  | Setting property hypervisor_type: qemu
2025-06-02 01:11:09.158704 | orchestrator | 2025-06-02 01:11:09 | INFO  | Setting property os_distro: ubuntu
2025-06-02 01:11:09.351541 | orchestrator | 2025-06-02 01:11:09 | INFO  | Setting property replace_frequency: quarterly
2025-06-02 01:11:09.570258 | orchestrator | 2025-06-02 01:11:09 | INFO  | Setting property uuid_validity: last-1
2025-06-02 01:11:09.799134 | orchestrator | 2025-06-02 01:11:09 | INFO  | Setting property provided_until: none
2025-06-02 01:11:09.979929 | orchestrator | 2025-06-02 01:11:09 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-06-02 01:11:10.175320 | orchestrator | 2025-06-02 01:11:10 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-06-02 01:11:10.581990 | orchestrator | 2025-06-02 01:11:10 | INFO  | Setting property internal_version: 2025-06-01
2025-06-02 01:11:10.780625 | orchestrator | 2025-06-02 01:11:10 | INFO  | Setting property image_original_user: ubuntu
2025-06-02 01:11:11.020273 | orchestrator | 2025-06-02 01:11:11 | INFO  | Setting property os_version: 2025-06-01
2025-06-02 01:11:11.239895 | orchestrator | 2025-06-02 01:11:11 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-02 01:11:11.459619 | orchestrator | 2025-06-02 01:11:11 | INFO  | Setting property image_build_date: 2025-06-01
2025-06-02 01:11:11.666717 | orchestrator | 2025-06-02 01:11:11 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-01'
2025-06-02 01:11:11.666926 | orchestrator | 2025-06-02 01:11:11 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-01'
2025-06-02 01:11:11.844585 | orchestrator | 2025-06-02 01:11:11 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-06-02 01:11:11.846342 | orchestrator | 2025-06-02 01:11:11 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-06-02 01:11:11.846943 | orchestrator | 2025-06-02 01:11:11 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-06-02 01:11:11.847938 | orchestrator | 2025-06-02 01:11:11 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-06-02 01:11:12.279528 | orchestrator | ok: Runtime: 0:02:52.312912
2025-06-02 01:11:12.345390 |
2025-06-02 01:11:12.345548 | TASK [Run checks]
2025-06-02 01:11:13.036201 | orchestrator | + set -e
2025-06-02 01:11:13.036389 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 01:11:13.036445 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 01:11:13.036467 | orchestrator | ++ INTERACTIVE=false
2025-06-02 01:11:13.036480 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 01:11:13.036493 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 01:11:13.036507 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-02 01:11:13.037343 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-02 01:11:13.043251 | orchestrator |
2025-06-02 01:11:13.043305 | orchestrator | # CHECK
2025-06-02 01:11:13.043320 | orchestrator |
2025-06-02 01:11:13.043333 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 01:11:13.043348 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 01:11:13.043360 | orchestrator | + echo
2025-06-02 01:11:13.043371 | orchestrator | + echo '# CHECK'
2025-06-02 01:11:13.043382 | orchestrator | + echo
2025-06-02 01:11:13.043396 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 01:11:13.044522 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 01:11:13.109446 | orchestrator |
2025-06-02 01:11:13.109537 | orchestrator | ## Containers @ testbed-manager
2025-06-02 01:11:13.109551 | orchestrator |
2025-06-02 01:11:13.109565 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 01:11:13.109576 | orchestrator | + echo
2025-06-02 01:11:13.109587 | orchestrator | + echo '## Containers @ testbed-manager'
2025-06-02 01:11:13.109599 | orchestrator | + echo
2025-06-02 01:11:13.109611 | orchestrator | + osism container testbed-manager ps
2025-06-02 01:11:15.195614 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 01:11:15.195749 | orchestrator | 30c942e44369 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter
2025-06-02 01:11:15.195776 | orchestrator | ef5966a1f87e registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager
2025-06-02 01:11:15.195831 | orchestrator | b20fa4854222 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-06-02 01:11:15.195845 | orchestrator | bdbd471b4460 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-06-02 01:11:15.195856 | orchestrator | fac8d49f6b88 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server
2025-06-02 01:11:15.195869 | orchestrator | e4559a32316c registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 16 minutes ago Up 15 minutes cephclient
2025-06-02 01:11:15.195884 | orchestrator | d2b42dc3dd37 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2025-06-02 01:11:15.195897 | orchestrator | f2ff629dc973 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox
2025-06-02 01:11:15.195908 | orchestrator | 1e8db0c5714c registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2025-06-02 01:11:15.195945 | orchestrator | e101a7c6a2dd phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 28 minutes ago Up 27 minutes (healthy) 80/tcp phpmyadmin
2025-06-02 01:11:15.195957 | orchestrator | b6a93cd5d7ca registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 28 minutes openstackclient
2025-06-02 01:11:15.195969 | orchestrator | bf3b29839fa4 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 29 minutes ago Up 28 minutes (healthy) 8080/tcp homer
2025-06-02 01:11:15.195980 | orchestrator | 95cb4a1f4ffc registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 48 minutes ago Up 47 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-06-02 01:11:15.195997 | orchestrator | a6e9b69e4bef registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 51 minutes ago Up 50 minutes (healthy) manager-inventory_reconciler-1
2025-06-02 01:11:15.196030 | orchestrator | 3936290ffd73 registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 51 minutes ago Up 51 minutes (healthy) osism-kubernetes
2025-06-02 01:11:15.196042 | orchestrator | facb0d0ef153 registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 51 minutes ago Up 51 minutes (healthy) ceph-ansible
2025-06-02 01:11:15.196054 | orchestrator | 4b75c89740d8 registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 51 minutes ago Up 51 minutes (healthy) osism-ansible
2025-06-02 01:11:15.196065 | orchestrator | ea2f3a37ccfc registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 51 minutes ago Up 51 minutes (healthy) kolla-ansible
2025-06-02 01:11:15.196076 | orchestrator | 777a79c6b13d registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 51 minutes ago Up 51 minutes (healthy) 8000/tcp manager-ara-server-1
2025-06-02 01:11:15.196088 | orchestrator | dca6a8e27fda registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 51 minutes ago Up 51 minutes (healthy) manager-openstack-1
2025-06-02 01:11:15.196099 | orchestrator | 6d1870a9d3b3 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 51 minutes ago Up 51 minutes (healthy) 6379/tcp manager-redis-1
2025-06-02 01:11:15.196111 | orchestrator | e9a3e1204149 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 51 minutes ago Up 51 minutes (healthy) 3306/tcp manager-mariadb-1
2025-06-02 01:11:15.196122 | orchestrator | 74cbde8a5e46 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 51 minutes ago Up 51 minutes (healthy) manager-flower-1
2025-06-02 01:11:15.196142 | orchestrator | e26611e64473 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 51 minutes ago Up 51 minutes (healthy) manager-listener-1
2025-06-02 01:11:15.196153 | orchestrator | bf9469da2a57 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 51 minutes ago Up 51 minutes (healthy) manager-watchdog-1
2025-06-02 01:11:15.196165 | orchestrator | 2f075aabf92c registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 51 minutes ago Up 51 minutes (healthy) osismclient
2025-06-02 01:11:15.196176 | orchestrator | da526980754f registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 51 minutes ago Up 51 minutes (healthy) manager-beat-1
2025-06-02 01:11:15.196187 | orchestrator | 67af366029f1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 51 minutes ago Up 51 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-06-02 01:11:15.196198 | orchestrator | 2d3c3a0a8cfb registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-06-02 01:11:15.421387 | orchestrator |
2025-06-02 01:11:15.421476 | orchestrator | ## Images @ testbed-manager
2025-06-02 01:11:15.421489 | orchestrator |
2025-06-02 01:11:15.421499 | orchestrator | + echo
2025-06-02 01:11:15.421508 | orchestrator | + echo '## Images @ testbed-manager'
2025-06-02 01:11:15.421518 | orchestrator | + echo
2025-06-02 01:11:15.421527 | orchestrator | + osism container testbed-manager images
2025-06-02 01:11:17.448520 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 01:11:17.449558 | orchestrator | registry.osism.tech/osism/homer v25.05.2 322317afcf13 22 hours ago 11.5MB
2025-06-02 01:11:17.449593 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 f2fe5144a396 22 hours ago 225MB
2025-06-02 01:11:17.449629 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 73cd5a0acb2a 28 hours ago 574MB
2025-06-02 01:11:17.449641 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 28 hours ago 578MB
2025-06-02 01:11:17.449652 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-02 01:11:17.449688 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-02 01:11:17.449700 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB
2025-06-02 01:11:17.449711 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 2 days ago 892MB
2025-06-02 01:11:17.449722 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 2 days ago 361MB
2025-06-02 01:11:17.449732 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB
2025-06-02 01:11:17.449743 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB
2025-06-02 01:11:17.449773 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 2 days ago 457MB
2025-06-02 01:11:17.449785 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 2 days ago 538MB
2025-06-02 01:11:17.449819 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 2 days ago 1.21GB
2025-06-02 01:11:17.449830 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 2 days ago 308MB
2025-06-02 01:11:17.449840 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 2 days ago 297MB
2025-06-02 01:11:17.449851 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 3 days ago 41.4MB
2025-06-02 01:11:17.449861 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 5 days ago 224MB
2025-06-02 01:11:17.449871 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 weeks ago 453MB
2025-06-02 01:11:17.449882 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB
2025-06-02 01:11:17.449892 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB
2025-06-02 01:11:17.449903 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB
2025-06-02 01:11:17.449913 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB
2025-06-02 01:11:17.683307 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 01:11:17.683663 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 01:11:17.736928 | orchestrator |
2025-06-02 01:11:17.737026 | orchestrator | ## Containers @ testbed-node-0
2025-06-02 01:11:17.737041 | orchestrator |
2025-06-02 01:11:17.737053 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 01:11:17.737064 | orchestrator | + echo
2025-06-02 01:11:17.737076 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-06-02 01:11:17.737087 | orchestrator | + echo
2025-06-02 01:11:17.737099 | orchestrator | + osism container testbed-node-0 ps
2025-06-02 01:11:19.834882 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 01:11:19.835016 | orchestrator | 26be938d552b registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-02 01:11:19.835037 | orchestrator | ebec42accace registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-02 01:11:19.835049 | orchestrator | 35d56e06dcb2 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-02 01:11:19.835060 | orchestrator | 3dbf3fe99294 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-02 01:11:19.835071 | orchestrator | 3ba2d455fa99 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2025-06-02 01:11:19.835102 | orchestrator | 286114e5b654 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-02 01:11:19.835114 | orchestrator | d1770d327bc5 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-06-02 01:11:19.835146 | orchestrator | f1587d802844 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-06-02 01:11:19.835157 | orchestrator | 99e197a958d5 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api
2025-06-02 01:11:19.835168 | orchestrator | fef25a238be0 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-02 01:11:19.835179 | orchestrator | 753ba1192934 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-02 01:11:19.835190 | orchestrator | 816a10cb080f registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-06-02 01:11:19.835200 | orchestrator | 98eb014e224d registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-06-02 01:11:19.835211 | orchestrator | 3e84d8310a9d registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-06-02 01:11:19.835222 | orchestrator | 9ac94514ce1d registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor
2025-06-02 01:11:19.835233 | orchestrator | 808ad15b14c5 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2025-06-02 01:11:19.835243 | orchestrator | 2b1685a95ff7 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api
2025-06-02 01:11:19.835254 | orchestrator | fc035f2cd7b1 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9
2025-06-02 01:11:19.835265 | orchestrator | 17d6add62965 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) barbican_worker
2025-06-02 01:11:19.835298 | orchestrator | 03137f2deb88 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) barbican_keystone_listener
2025-06-02 01:11:19.835310 | orchestrator | 6f2aadfb9abd registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-06-02 01:11:19.835320 | orchestrator | 8961bad8dd12 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-06-02 01:11:19.835331 | orchestrator | cf8bda2d6989 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler
2025-06-02 01:11:19.835342 | orchestrator | 499d42cda3ed registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-06-02 01:11:19.835361 | orchestrator | 6bfa6e041863 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-06-02 01:11:19.835379 | orchestrator | ac8cbf9449a4 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-06-02 01:11:19.835390 | orchestrator | c2fb52578074 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-06-02 01:11:19.835401 | orchestrator | 754e56a42009 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-06-02 01:11:19.835418 | orchestrator | 8c6b9201b06b registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-06-02 01:11:19.835434 | orchestrator | 61f53b8d18c2 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2025-06-02 01:11:19.835445 | orchestrator | fe5d8b4f49e7 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-06-02 01:11:19.835456 | orchestrator | 3a5846066e98 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 14 minutes ago Up 14 minutes ceph-mgr-testbed-node-0
2025-06-02 01:11:19.835466 | orchestrator | 16fb0463b49c registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2025-06-02 01:11:19.835477 | orchestrator | 96d38e36f99b registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2025-06-02 01:11:19.835488 | orchestrator | 3b727955da52 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh
2025-06-02 01:11:19.835503 | orchestrator | 4cc1f2c0fc7c registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (unhealthy) horizon
2025-06-02 01:11:19.835515 | orchestrator | 0e6db58bcdbc registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb
2025-06-02 01:11:19.835526 | orchestrator | e176ddf46442 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0
2025-06-02 01:11:19.835537 | orchestrator | 7ef7c3c6dcbb registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2025-06-02 01:11:19.835547 | orchestrator | 5446f3dbc523 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql
2025-06-02 01:11:19.835567 | orchestrator | a99c2edde2eb registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy
2025-06-02 01:11:19.835578 | orchestrator | eb7e520b4176 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd
2025-06-02 01:11:19.835589 | orchestrator | 1e560e1e47cd registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db
2025-06-02 01:11:19.835606 | orchestrator | 96dd7a38f572 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db
2025-06-02 01:11:19.835617 | orchestrator | 1be63f6eddbe registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2025-06-02 01:11:19.835628 | orchestrator | b99ecfd2fdb7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-0
2025-06-02 01:11:19.835639 | orchestrator | a56ab8902577 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-06-02 01:11:19.835650 | orchestrator | 5d60c30cb387 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd
2025-06-02 01:11:19.835661 | orchestrator | c75f6f0e56b2 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_db
2025-06-02 01:11:19.835671 | orchestrator | 2a901fc8a0e2 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis_sentinel
2025-06-02 01:11:19.835682 | orchestrator | 47fdec140f63 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis
2025-06-02 01:11:19.835693 | orchestrator | 4b8471f6ec77 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2025-06-02 01:11:19.835704 | orchestrator | 630f78ecddb6 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2025-06-02 01:11:19.835714 | orchestrator | 08064e09083f registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2025-06-02 01:11:19.835725 | orchestrator | 0188dc7a5d9c registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2025-06-02 01:11:20.069292 | orchestrator |
2025-06-02 01:11:20.069399 | orchestrator | ## Images @ testbed-node-0
2025-06-02 01:11:20.069415 | orchestrator |
2025-06-02 01:11:20.069428 | orchestrator | + echo
2025-06-02 01:11:20.069443 | orchestrator | + echo '## Images @ testbed-node-0'
2025-06-02 01:11:20.069458 | orchestrator | + echo
2025-06-02 01:11:20.069470 | orchestrator | + osism container testbed-node-0 images
2025-06-02 01:11:22.147017 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 01:11:22.147130 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB
2025-06-02 01:11:22.147154 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-02 01:11:22.147174 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB
2025-06-02 01:11:22.147193 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB
2025-06-02 01:11:22.147214 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB
2025-06-02 01:11:22.147233 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB
2025-06-02 01:11:22.147279 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-02 01:11:22.147291 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB
2025-06-02 01:11:22.147302 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB
2025-06-02 01:11:22.147313 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB
2025-06-02 01:11:22.147341 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB
2025-06-02 01:11:22.147353 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB
2025-06-02 01:11:22.147363 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB
2025-06-02 01:11:22.147374 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB
2025-06-02 01:11:22.147385 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB
2025-06-02 01:11:22.147396 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB
2025-06-02 01:11:22.147407 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB
2025-06-02 01:11:22.147418 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB
2025-06-02 01:11:22.147428 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB
2025-06-02 01:11:22.147439 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB
2025-06-02 01:11:22.147449 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB
2025-06-02 01:11:22.147460 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB
2025-06-02 01:11:22.147470 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB
2025-06-02 01:11:22.147481 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB
2025-06-02 01:11:22.147492 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB
2025-06-02 01:11:22.147502 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 2 days ago 1.04GB
2025-06-02 01:11:22.147513 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 2 days ago 1.04GB
2025-06-02 01:11:22.147524 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 2 days ago 1.04GB
2025-06-02 01:11:22.147536 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 2 days ago 1.04GB
2025-06-02 01:11:22.147549 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB
2025-06-02 01:11:22.147561 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB
2025-06-02 01:11:22.147594 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB
2025-06-02 01:11:22.147608 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB
2025-06-02 01:11:22.147630 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB
2025-06-02 01:11:22.147643 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB
2025-06-02 01:11:22.147662 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB
2025-06-02 01:11:22.147675 | orchestrator |
registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-02 01:11:22.147688 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-02 01:11:22.147700 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-02 01:11:22.147713 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-02 01:11:22.147725 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-02 01:11:22.147738 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB 2025-06-02 01:11:22.147751 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-02 01:11:22.147764 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-02 01:11:22.147776 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 2 days ago 1.04GB 2025-06-02 01:11:22.147813 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 2 days ago 1.04GB 2025-06-02 01:11:22.147827 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-02 01:11:22.147840 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB 2025-06-02 01:11:22.147853 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB 2025-06-02 01:11:22.147865 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB 2025-06-02 01:11:22.147878 | orchestrator | 
registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB 2025-06-02 01:11:22.147891 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB 2025-06-02 01:11:22.147901 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB 2025-06-02 01:11:22.147912 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB 2025-06-02 01:11:22.147923 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB 2025-06-02 01:11:22.147933 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB 2025-06-02 01:11:22.147944 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 2 days ago 1.11GB 2025-06-02 01:11:22.147955 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 2 days ago 1.12GB 2025-06-02 01:11:22.147965 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB 2025-06-02 01:11:22.147983 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB 2025-06-02 01:11:22.147993 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB 2025-06-02 01:11:22.148004 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB 2025-06-02 01:11:22.148015 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-02 01:11:22.366142 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 01:11:22.366236 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 01:11:22.416025 | orchestrator | 2025-06-02 
01:11:22.416110 | orchestrator | ## Containers @ testbed-node-1 2025-06-02 01:11:22.416124 | orchestrator | 2025-06-02 01:11:22.416135 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 01:11:22.416147 | orchestrator | + echo 2025-06-02 01:11:22.416158 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-06-02 01:11:22.416170 | orchestrator | + echo 2025-06-02 01:11:22.416181 | orchestrator | + osism container testbed-node-1 ps 2025-06-02 01:11:24.485039 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 01:11:24.485177 | orchestrator | 32e4c59f0aae registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-02 01:11:24.485193 | orchestrator | ef068c974c5d registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-02 01:11:24.485205 | orchestrator | 4bf93978409a registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-02 01:11:24.485217 | orchestrator | 36bb70e5a732 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-02 01:11:24.485228 | orchestrator | db52bcd7e7dd registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-06-02 01:11:24.485239 | orchestrator | acb01d9637ce registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-06-02 01:11:24.485250 | orchestrator | f5dba95760ab registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-02 01:11:24.485261 | orchestrator | 42ac07277ceb 
registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-06-02 01:11:24.485272 | orchestrator | 14d3d4f64a37 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api 2025-06-02 01:11:24.485283 | orchestrator | 7dd20d8faecf registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-02 01:11:24.485294 | orchestrator | 99973f1c876c registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-02 01:11:24.485305 | orchestrator | b6f387fe4c87 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-06-02 01:11:24.485344 | orchestrator | 3446bdf2ed83 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-02 01:11:24.485356 | orchestrator | 951bd887f993 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-06-02 01:11:24.485367 | orchestrator | 641c8671ed03 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-06-02 01:11:24.485378 | orchestrator | 0af8b78d0f35 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor 2025-06-02 01:11:24.485389 | orchestrator | 50d9d9768a6e registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-06-02 01:11:24.485418 | orchestrator | 12a95f396af7 
registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-06-02 01:11:24.485430 | orchestrator | ce690cc5745f registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) barbican_worker 2025-06-02 01:11:24.485462 | orchestrator | e24034a19a09 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) barbican_keystone_listener 2025-06-02 01:11:24.485475 | orchestrator | 5d2dd9229aae registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-06-02 01:11:24.485486 | orchestrator | 25db2b33576f registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-06-02 01:11:24.485497 | orchestrator | 1161d12e6199 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-02 01:11:24.485511 | orchestrator | b9dcd0d9301e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 13 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-06-02 01:11:24.485531 | orchestrator | c1ea8335d75c registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-06-02 01:11:24.485550 | orchestrator | b8582b137ed1 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-02 01:11:24.485569 | orchestrator | 4ca8070a7f94 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-06-02 
01:11:24.485586 | orchestrator | ef95f46d0a98 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-02 01:11:24.485605 | orchestrator | 062349b01e97 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2025-06-02 01:11:24.485625 | orchestrator | 6576bdaa8e55 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-02 01:11:24.485656 | orchestrator | 2418e95391f5 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-02 01:11:24.485677 | orchestrator | 5cb5c30cd00e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 14 minutes ago Up 14 minutes ceph-mgr-testbed-node-1 2025-06-02 01:11:24.485697 | orchestrator | 06d159eeaa7c registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-06-02 01:11:24.485711 | orchestrator | cdc215a438f0 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (unhealthy) horizon 2025-06-02 01:11:24.485723 | orchestrator | d7d49acb4977 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2025-06-02 01:11:24.485736 | orchestrator | 1096ec973d01 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh 2025-06-02 01:11:24.485749 | orchestrator | 273e59b1006a registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-06-02 01:11:24.485762 | 
orchestrator | 046e88d36679 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2025-06-02 01:11:24.485774 | orchestrator | 8fcfef4379b7 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2025-06-02 01:11:24.485812 | orchestrator | ca7dc4b948a5 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2025-06-02 01:11:24.485846 | orchestrator | 9bf80275203e registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy 2025-06-02 01:11:24.485860 | orchestrator | 758aec793f94 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 24 minutes ago Up 23 minutes ovn_northd 2025-06-02 01:11:24.485873 | orchestrator | 820cb2f595fd registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2025-06-02 01:11:24.485885 | orchestrator | c0048f96fd1c registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db 2025-06-02 01:11:24.485899 | orchestrator | 8927c59c3b6e registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2025-06-02 01:11:24.485912 | orchestrator | 89099001b56c registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2025-06-02 01:11:24.485925 | orchestrator | 68b4fe0b4c07 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-1 2025-06-02 01:11:24.485935 | orchestrator | 7023674d9975 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) 
openvswitch_vswitchd 2025-06-02 01:11:24.485953 | orchestrator | 57b7343676dc registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_db 2025-06-02 01:11:24.485964 | orchestrator | f8c9eb6a9681 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis_sentinel 2025-06-02 01:11:24.485975 | orchestrator | 6f1a491d2efc registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-06-02 01:11:24.485986 | orchestrator | 2cf0a35e0056 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-06-02 01:11:24.485997 | orchestrator | ddc7dba825e7 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2025-06-02 01:11:24.486008 | orchestrator | 8601e0fe7a81 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2025-06-02 01:11:24.486118 | orchestrator | 07dc15ffc57a registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2025-06-02 01:11:24.727870 | orchestrator | 2025-06-02 01:11:24.727971 | orchestrator | ## Images @ testbed-node-1 2025-06-02 01:11:24.727988 | orchestrator | 2025-06-02 01:11:24.728000 | orchestrator | + echo 2025-06-02 01:11:24.728012 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-02 01:11:24.728024 | orchestrator | + echo 2025-06-02 01:11:24.728036 | orchestrator | + osism container testbed-node-1 images 2025-06-02 01:11:26.769390 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 01:11:26.769496 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB 2025-06-02 01:11:26.769511 | 
orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-02 01:11:26.769524 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB 2025-06-02 01:11:26.769535 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB 2025-06-02 01:11:26.769550 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-02 01:11:26.769568 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB 2025-06-02 01:11:26.769587 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB 2025-06-02 01:11:26.769604 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-02 01:11:26.769623 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB 2025-06-02 01:11:26.769642 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB 2025-06-02 01:11:26.769660 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB 2025-06-02 01:11:26.769676 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB 2025-06-02 01:11:26.769687 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-02 01:11:26.769699 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB 2025-06-02 01:11:26.769734 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-02 01:11:26.769764 | orchestrator | 
registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB 2025-06-02 01:11:26.769775 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB 2025-06-02 01:11:26.769786 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB 2025-06-02 01:11:26.769869 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB 2025-06-02 01:11:26.769880 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB 2025-06-02 01:11:26.769891 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB 2025-06-02 01:11:26.769901 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB 2025-06-02 01:11:26.769911 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB 2025-06-02 01:11:26.769922 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB 2025-06-02 01:11:26.769932 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB 2025-06-02 01:11:26.769943 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB 2025-06-02 01:11:26.769953 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB 2025-06-02 01:11:26.769964 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB 2025-06-02 01:11:26.769974 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB 2025-06-02 01:11:26.769985 | orchestrator | 
registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB 2025-06-02 01:11:26.769995 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-02 01:11:26.770076 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-02 01:11:26.770090 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-02 01:11:26.770101 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-02 01:11:26.770111 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-02 01:11:26.770122 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB 2025-06-02 01:11:26.770133 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-02 01:11:26.770180 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-02 01:11:26.770192 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-02 01:11:26.770202 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB 2025-06-02 01:11:26.770224 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB 2025-06-02 01:11:26.770235 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB 2025-06-02 01:11:26.770246 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB 2025-06-02 01:11:26.770257 | orchestrator | 
registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB 2025-06-02 01:11:26.770267 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB 2025-06-02 01:11:26.770279 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB 2025-06-02 01:11:26.770290 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB 2025-06-02 01:11:26.770303 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB 2025-06-02 01:11:26.770322 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB 2025-06-02 01:11:26.770338 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB 2025-06-02 01:11:26.770355 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB 2025-06-02 01:11:26.770375 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB 2025-06-02 01:11:26.770394 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-02 01:11:27.000032 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 01:11:27.001783 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 01:11:27.047728 | orchestrator | 2025-06-02 01:11:27.047831 | orchestrator | ## Containers @ testbed-node-2 2025-06-02 01:11:27.047845 | orchestrator | 2025-06-02 01:11:27.047856 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 01:11:27.047866 | orchestrator | + echo 2025-06-02 01:11:27.047878 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-02 01:11:27.047890 | orchestrator | + echo 2025-06-02 01:11:27.047901 | orchestrator | + osism container testbed-node-2 ps 
2025-06-02 01:11:29.198293 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 01:11:29.198402 | orchestrator | f51a57b21a7f registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-02 01:11:29.198417 | orchestrator | e84f986644cc registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-02 01:11:29.198449 | orchestrator | c159c85c3c16 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-02 01:11:29.198466 | orchestrator | a9e76a8917ba registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-02 01:11:29.198478 | orchestrator | b165620f86c4 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-06-02 01:11:29.198489 | orchestrator | 8582898cad9e registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-06-02 01:11:29.198519 | orchestrator | ecfecccd747a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-02 01:11:29.198531 | orchestrator | b3aad3da1351 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-06-02 01:11:29.198542 | orchestrator | edfe6c5b915d registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api 2025-06-02 01:11:29.198553 | orchestrator | 9e4981dd6ae1 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init 
--single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-02 01:11:29.198564 | orchestrator | a6d75f5b4fc4 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-02 01:11:29.198575 | orchestrator | f7b14e3ca448 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-06-02 01:11:29.198585 | orchestrator | ec9dcb5aad03 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-02 01:11:29.198596 | orchestrator | e2f8615c604a registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-06-02 01:11:29.198607 | orchestrator | 0e676fe1483f registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-06-02 01:11:29.198618 | orchestrator | 10cbfdd134be registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor 2025-06-02 01:11:29.198629 | orchestrator | 8399ce94ab42 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-06-02 01:11:29.198640 | orchestrator | d79dc0341caa registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-06-02 01:11:29.198651 | orchestrator | a801b0cf3f7e registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) barbican_worker 2025-06-02 01:11:29.198680 | orchestrator | 14a8a784dd03 
registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-06-02 01:11:29.198692 | orchestrator | 16091fd4d958 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-06-02 01:11:29.198703 | orchestrator | 9a97e294586d registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-06-02 01:11:29.198715 | orchestrator | ab102dfa9092 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-02 01:11:29.198778 | orchestrator | 0ad362e82503 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-06-02 01:11:29.198888 | orchestrator | e303d92d1e89 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-06-02 01:11:29.198906 | orchestrator | 1d0a0a431867 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-02 01:11:29.198919 | orchestrator | cc3a76ae932a registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-06-02 01:11:29.198933 | orchestrator | 050750a9695a registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-02 01:11:29.198947 | orchestrator | 84c707c4254a registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2025-06-02 
01:11:29.198961 | orchestrator | 980a1c73a8a2 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-02 01:11:29.198974 | orchestrator | 124e20ee0c40 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-02 01:11:29.198988 | orchestrator | bd84d40f7591 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 14 minutes ago Up 14 minutes ceph-mgr-testbed-node-2 2025-06-02 01:11:29.199001 | orchestrator | b73cf35f1b91 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-06-02 01:11:29.199015 | orchestrator | 037e074d030d registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (unhealthy) horizon 2025-06-02 01:11:29.199036 | orchestrator | 4bbedb446355 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2025-06-02 01:11:29.199049 | orchestrator | 2d97a917f891 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh 2025-06-02 01:11:29.199063 | orchestrator | 3d9ab371635b registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-06-02 01:11:29.199075 | orchestrator | ad60c9ca825b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2025-06-02 01:11:29.199089 | orchestrator | cc7eb40b1c21 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2025-06-02 01:11:29.199101 | orchestrator | d45db777aeed 
registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2025-06-02 01:11:29.199124 | orchestrator | 3d38b31081ce registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy 2025-06-02 01:11:29.199232 | orchestrator | 3434757ea39e registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2025-06-02 01:11:29.199254 | orchestrator | e908c3e98100 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2025-06-02 01:11:29.199265 | orchestrator | 18c71a9b5532 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db 2025-06-02 01:11:29.199281 | orchestrator | 90647f306e2a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2025-06-02 01:11:29.199292 | orchestrator | 73fe518cc47d registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2025-06-02 01:11:29.199303 | orchestrator | 325cc7d2ea5e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-2 2025-06-02 01:11:29.199313 | orchestrator | 25625aa8c657 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2025-06-02 01:11:29.199324 | orchestrator | 603f1f113121 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_db 2025-06-02 01:11:29.199335 | orchestrator | 756dabe2f92c registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 27 minutes ago Up 
26 minutes (healthy) redis_sentinel 2025-06-02 01:11:29.199346 | orchestrator | 0c8143c52f56 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-06-02 01:11:29.199357 | orchestrator | 6f0e2bae3821 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-06-02 01:11:29.199367 | orchestrator | 3e290d068bf6 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2025-06-02 01:11:29.199378 | orchestrator | 5a03df45d698 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2025-06-02 01:11:29.199389 | orchestrator | e0e4c07911b1 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2025-06-02 01:11:29.482311 | orchestrator | 2025-06-02 01:11:29.482401 | orchestrator | ## Images @ testbed-node-2 2025-06-02 01:11:29.482416 | orchestrator | 2025-06-02 01:11:29.482428 | orchestrator | + echo 2025-06-02 01:11:29.482440 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-02 01:11:29.482452 | orchestrator | + echo 2025-06-02 01:11:29.482463 | orchestrator | + osism container testbed-node-2 images 2025-06-02 01:11:31.605425 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 01:11:31.605526 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB 2025-06-02 01:11:31.605540 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-02 01:11:31.605553 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB 2025-06-02 01:11:31.605564 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB 2025-06-02 01:11:31.605575 | 
orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-02 01:11:31.605608 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB 2025-06-02 01:11:31.605620 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB 2025-06-02 01:11:31.605630 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-02 01:11:31.605641 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB 2025-06-02 01:11:31.605651 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB 2025-06-02 01:11:31.605662 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB 2025-06-02 01:11:31.605672 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB 2025-06-02 01:11:31.605683 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-02 01:11:31.605693 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB 2025-06-02 01:11:31.605704 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-02 01:11:31.605714 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB 2025-06-02 01:11:31.605725 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB 2025-06-02 01:11:31.605735 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB 2025-06-02 01:11:31.605746 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB 2025-06-02 01:11:31.605756 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB 2025-06-02 01:11:31.605766 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB 2025-06-02 01:11:31.605777 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB 2025-06-02 01:11:31.605814 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB 2025-06-02 01:11:31.605840 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB 2025-06-02 01:11:31.605852 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB 2025-06-02 01:11:31.605862 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB 2025-06-02 01:11:31.605873 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB 2025-06-02 01:11:31.605883 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB 2025-06-02 01:11:31.605894 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB 2025-06-02 01:11:31.605904 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB 2025-06-02 01:11:31.605915 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-02 01:11:31.605950 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-02 01:11:31.605962 | orchestrator | 
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-02 01:11:31.605973 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-02 01:11:31.605983 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-02 01:11:31.605994 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB 2025-06-02 01:11:31.606006 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-02 01:11:31.606071 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-02 01:11:31.606084 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-02 01:11:31.606098 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB 2025-06-02 01:11:31.606112 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB 2025-06-02 01:11:31.606124 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB 2025-06-02 01:11:31.606137 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB 2025-06-02 01:11:31.606150 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB 2025-06-02 01:11:31.606163 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB 2025-06-02 01:11:31.606175 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB 2025-06-02 01:11:31.606187 | orchestrator | 
registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB 2025-06-02 01:11:31.606206 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB 2025-06-02 01:11:31.606219 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB 2025-06-02 01:11:31.606232 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB 2025-06-02 01:11:31.606244 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB 2025-06-02 01:11:31.606257 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB 2025-06-02 01:11:31.606270 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-02 01:11:31.835076 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-02 01:11:31.844520 | orchestrator | + set -e 2025-06-02 01:11:31.844573 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 01:11:31.845439 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 01:11:31.845457 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 01:11:31.845465 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 01:11:31.845474 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 01:11:31.845483 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 01:11:31.845492 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 01:11:31.845517 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 01:11:31.845526 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 01:11:31.845533 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 01:11:31.845562 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 01:11:31.845569 | orchestrator | ++ export ARA=false 2025-06-02 01:11:31.845576 | orchestrator | ++ ARA=false 2025-06-02 
01:11:31.845583 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 01:11:31.845591 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 01:11:31.845598 | orchestrator | ++ export TEMPEST=false 2025-06-02 01:11:31.845605 | orchestrator | ++ TEMPEST=false 2025-06-02 01:11:31.845612 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 01:11:31.845619 | orchestrator | ++ IS_ZUUL=true 2025-06-02 01:11:31.845626 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2025-06-02 01:11:31.845634 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2025-06-02 01:11:31.845732 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 01:11:31.845744 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 01:11:31.845751 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 01:11:31.845758 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 01:11:31.845765 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 01:11:31.845772 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 01:11:31.845779 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 01:11:31.845786 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 01:11:31.845816 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 01:11:31.845824 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-02 01:11:31.853153 | orchestrator | + set -e 2025-06-02 01:11:31.853195 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 01:11:31.853205 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 01:11:31.853215 | orchestrator | ++ INTERACTIVE=false 2025-06-02 01:11:31.853224 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 01:11:31.853232 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 01:11:31.853242 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 01:11:31.853945 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2025-06-02 01:11:31.859756 | orchestrator | 2025-06-02 01:11:31.859785 | orchestrator | # Ceph status 2025-06-02 01:11:31.859820 | orchestrator | 2025-06-02 01:11:31.859831 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 01:11:31.859843 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 01:11:31.859853 | orchestrator | + echo 2025-06-02 01:11:31.859864 | orchestrator | + echo '# Ceph status' 2025-06-02 01:11:31.859875 | orchestrator | + echo 2025-06-02 01:11:31.859886 | orchestrator | + ceph -s 2025-06-02 01:11:32.413205 | orchestrator | cluster: 2025-06-02 01:11:32.413320 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-02 01:11:32.413337 | orchestrator | health: HEALTH_OK 2025-06-02 01:11:32.413350 | orchestrator | 2025-06-02 01:11:32.413361 | orchestrator | services: 2025-06-02 01:11:32.413373 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 25m) 2025-06-02 01:11:32.413386 | orchestrator | mgr: testbed-node-0(active, since 14m), standbys: testbed-node-1, testbed-node-2 2025-06-02 01:11:32.413398 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-02 01:11:32.413410 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 22m) 2025-06-02 01:11:32.413421 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-02 01:11:32.413432 | orchestrator | 2025-06-02 01:11:32.413443 | orchestrator | data: 2025-06-02 01:11:32.413454 | orchestrator | volumes: 1/1 healthy 2025-06-02 01:11:32.413465 | orchestrator | pools: 14 pools, 417 pgs 2025-06-02 01:11:32.413476 | orchestrator | objects: 556 objects, 2.2 GiB 2025-06-02 01:11:32.413487 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-02 01:11:32.413499 | orchestrator | pgs: 417 active+clean 2025-06-02 01:11:32.413509 | orchestrator | 2025-06-02 01:11:32.462961 | orchestrator | 2025-06-02 01:11:32.463043 | orchestrator | # Ceph versions 2025-06-02 
01:11:32.463057 | orchestrator | 2025-06-02 01:11:32.463068 | orchestrator | + echo 2025-06-02 01:11:32.463079 | orchestrator | + echo '# Ceph versions' 2025-06-02 01:11:32.463091 | orchestrator | + echo 2025-06-02 01:11:32.463102 | orchestrator | + ceph versions 2025-06-02 01:11:33.060970 | orchestrator | { 2025-06-02 01:11:33.061070 | orchestrator | "mon": { 2025-06-02 01:11:33.061085 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 01:11:33.061097 | orchestrator | }, 2025-06-02 01:11:33.061109 | orchestrator | "mgr": { 2025-06-02 01:11:33.061120 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 01:11:33.061131 | orchestrator | }, 2025-06-02 01:11:33.061142 | orchestrator | "osd": { 2025-06-02 01:11:33.061152 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-02 01:11:33.061193 | orchestrator | }, 2025-06-02 01:11:33.061205 | orchestrator | "mds": { 2025-06-02 01:11:33.061216 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 01:11:33.061226 | orchestrator | }, 2025-06-02 01:11:33.061237 | orchestrator | "rgw": { 2025-06-02 01:11:33.061254 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 01:11:33.061273 | orchestrator | }, 2025-06-02 01:11:33.061291 | orchestrator | "overall": { 2025-06-02 01:11:33.061310 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-02 01:11:33.061328 | orchestrator | } 2025-06-02 01:11:33.061347 | orchestrator | } 2025-06-02 01:11:33.124746 | orchestrator | 2025-06-02 01:11:33.124863 | orchestrator | # Ceph OSD tree 2025-06-02 01:11:33.124887 | orchestrator | 2025-06-02 01:11:33.124902 | orchestrator | + echo 2025-06-02 01:11:33.124915 | orchestrator | + echo '# Ceph OSD tree' 2025-06-02 
01:11:33.124929 | orchestrator | + echo 2025-06-02 01:11:33.124942 | orchestrator | + ceph osd df tree 2025-06-02 01:11:33.624634 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-02 01:11:33.691043 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 425 MiB 113 GiB 5.91 1.00 - root default 2025-06-02 01:11:33.691106 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2025-06-02 01:11:33.691116 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.67 0.96 196 up osd.0 2025-06-02 01:11:33.691124 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.14 1.04 212 up osd.4 2025-06-02 01:11:33.691131 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-02 01:11:33.691138 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.71 1.13 191 up osd.1 2025-06-02 01:11:33.691146 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 979 MiB 1 KiB 70 MiB 19 GiB 5.13 0.87 213 up osd.3 2025-06-02 01:11:33.691153 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-02 01:11:33.691160 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.44 0.92 203 up osd.2 2025-06-02 01:11:33.691167 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.39 1.08 203 up osd.5 2025-06-02 01:11:33.691188 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 425 MiB 113 GiB 5.91 2025-06-02 01:11:33.691195 | orchestrator | MIN/MAX VAR: 0.87/1.13 STDDEV: 0.55 2025-06-02 01:11:33.691237 | orchestrator | 2025-06-02 01:11:33.691246 | orchestrator | # Ceph monitor status 2025-06-02 01:11:33.691254 | orchestrator | 2025-06-02 01:11:33.691261 | orchestrator | + echo 2025-06-02 01:11:33.691268 | orchestrator | + echo '# 
Ceph monitor status' 2025-06-02 01:11:33.691274 | orchestrator | + echo 2025-06-02 01:11:33.691281 | orchestrator | + ceph mon stat 2025-06-02 01:11:34.276521 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-02 01:11:34.320881 | orchestrator | 2025-06-02 01:11:34.320971 | orchestrator | # Ceph quorum status 2025-06-02 01:11:34.320985 | orchestrator | 2025-06-02 01:11:34.320996 | orchestrator | + echo 2025-06-02 01:11:34.321008 | orchestrator | + echo '# Ceph quorum status' 2025-06-02 01:11:34.321019 | orchestrator | + echo 2025-06-02 01:11:34.321030 | orchestrator | + ceph quorum_status 2025-06-02 01:11:34.322601 | orchestrator | + jq 2025-06-02 01:11:34.949264 | orchestrator | { 2025-06-02 01:11:34.949359 | orchestrator | "election_epoch": 8, 2025-06-02 01:11:34.949375 | orchestrator | "quorum": [ 2025-06-02 01:11:34.949409 | orchestrator | 0, 2025-06-02 01:11:34.949421 | orchestrator | 1, 2025-06-02 01:11:34.949432 | orchestrator | 2 2025-06-02 01:11:34.949442 | orchestrator | ], 2025-06-02 01:11:34.949453 | orchestrator | "quorum_names": [ 2025-06-02 01:11:34.949464 | orchestrator | "testbed-node-0", 2025-06-02 01:11:34.949475 | orchestrator | "testbed-node-1", 2025-06-02 01:11:34.949486 | orchestrator | "testbed-node-2" 2025-06-02 01:11:34.949497 | orchestrator | ], 2025-06-02 01:11:34.949508 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-02 01:11:34.949520 | orchestrator | "quorum_age": 1550, 2025-06-02 01:11:34.949531 | orchestrator | "features": { 2025-06-02 01:11:34.949542 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-02 01:11:34.949553 | orchestrator | "quorum_mon": [ 2025-06-02 01:11:34.949564 | 
orchestrator | "kraken", 2025-06-02 01:11:34.949574 | orchestrator | "luminous", 2025-06-02 01:11:34.949585 | orchestrator | "mimic", 2025-06-02 01:11:34.949596 | orchestrator | "osdmap-prune", 2025-06-02 01:11:34.949607 | orchestrator | "nautilus", 2025-06-02 01:11:34.949617 | orchestrator | "octopus", 2025-06-02 01:11:34.949628 | orchestrator | "pacific", 2025-06-02 01:11:34.949639 | orchestrator | "elector-pinging", 2025-06-02 01:11:34.949650 | orchestrator | "quincy", 2025-06-02 01:11:34.949660 | orchestrator | "reef" 2025-06-02 01:11:34.949671 | orchestrator | ] 2025-06-02 01:11:34.949682 | orchestrator | }, 2025-06-02 01:11:34.949693 | orchestrator | "monmap": { 2025-06-02 01:11:34.949704 | orchestrator | "epoch": 1, 2025-06-02 01:11:34.949714 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-02 01:11:34.949726 | orchestrator | "modified": "2025-06-02T00:45:27.519352Z", 2025-06-02 01:11:34.949737 | orchestrator | "created": "2025-06-02T00:45:27.519352Z", 2025-06-02 01:11:34.949747 | orchestrator | "min_mon_release": 18, 2025-06-02 01:11:34.949758 | orchestrator | "min_mon_release_name": "reef", 2025-06-02 01:11:34.949769 | orchestrator | "election_strategy": 1, 2025-06-02 01:11:34.949780 | orchestrator | "disallowed_leaders: ": "", 2025-06-02 01:11:34.949830 | orchestrator | "stretch_mode": false, 2025-06-02 01:11:34.949845 | orchestrator | "tiebreaker_mon": "", 2025-06-02 01:11:34.949858 | orchestrator | "removed_ranks: ": "", 2025-06-02 01:11:34.949870 | orchestrator | "features": { 2025-06-02 01:11:34.949883 | orchestrator | "persistent": [ 2025-06-02 01:11:34.949894 | orchestrator | "kraken", 2025-06-02 01:11:34.949907 | orchestrator | "luminous", 2025-06-02 01:11:34.949919 | orchestrator | "mimic", 2025-06-02 01:11:34.949932 | orchestrator | "osdmap-prune", 2025-06-02 01:11:34.949944 | orchestrator | "nautilus", 2025-06-02 01:11:34.949957 | orchestrator | "octopus", 2025-06-02 01:11:34.949969 | orchestrator | "pacific", 2025-06-02 
01:11:34.949981 | orchestrator | "elector-pinging", 2025-06-02 01:11:34.949993 | orchestrator | "quincy", 2025-06-02 01:11:34.950006 | orchestrator | "reef" 2025-06-02 01:11:34.950065 | orchestrator | ], 2025-06-02 01:11:34.950079 | orchestrator | "optional": [] 2025-06-02 01:11:34.950091 | orchestrator | }, 2025-06-02 01:11:34.950104 | orchestrator | "mons": [ 2025-06-02 01:11:34.950117 | orchestrator | { 2025-06-02 01:11:34.950129 | orchestrator | "rank": 0, 2025-06-02 01:11:34.950142 | orchestrator | "name": "testbed-node-0", 2025-06-02 01:11:34.950154 | orchestrator | "public_addrs": { 2025-06-02 01:11:34.950166 | orchestrator | "addrvec": [ 2025-06-02 01:11:34.950179 | orchestrator | { 2025-06-02 01:11:34.950191 | orchestrator | "type": "v2", 2025-06-02 01:11:34.950202 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-02 01:11:34.950213 | orchestrator | "nonce": 0 2025-06-02 01:11:34.950223 | orchestrator | }, 2025-06-02 01:11:34.950234 | orchestrator | { 2025-06-02 01:11:34.950245 | orchestrator | "type": "v1", 2025-06-02 01:11:34.950255 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-02 01:11:34.950267 | orchestrator | "nonce": 0 2025-06-02 01:11:34.950278 | orchestrator | } 2025-06-02 01:11:34.950288 | orchestrator | ] 2025-06-02 01:11:34.950299 | orchestrator | }, 2025-06-02 01:11:34.950310 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-02 01:11:34.950321 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-02 01:11:34.950332 | orchestrator | "priority": 0, 2025-06-02 01:11:34.950342 | orchestrator | "weight": 0, 2025-06-02 01:11:34.950353 | orchestrator | "crush_location": "{}" 2025-06-02 01:11:34.950364 | orchestrator | }, 2025-06-02 01:11:34.950374 | orchestrator | { 2025-06-02 01:11:34.950385 | orchestrator | "rank": 1, 2025-06-02 01:11:34.950395 | orchestrator | "name": "testbed-node-1", 2025-06-02 01:11:34.950406 | orchestrator | "public_addrs": { 2025-06-02 01:11:34.950424 | orchestrator | "addrvec": [ 2025-06-02 
01:11:34.950435 | orchestrator | { 2025-06-02 01:11:34.950446 | orchestrator | "type": "v2", 2025-06-02 01:11:34.950456 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-02 01:11:34.950467 | orchestrator | "nonce": 0 2025-06-02 01:11:34.950478 | orchestrator | }, 2025-06-02 01:11:34.950488 | orchestrator | { 2025-06-02 01:11:34.950499 | orchestrator | "type": "v1", 2025-06-02 01:11:34.950510 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-02 01:11:34.950520 | orchestrator | "nonce": 0 2025-06-02 01:11:34.950531 | orchestrator | } 2025-06-02 01:11:34.950542 | orchestrator | ] 2025-06-02 01:11:34.950553 | orchestrator | }, 2025-06-02 01:11:34.950563 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-02 01:11:34.950574 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-02 01:11:34.950585 | orchestrator | "priority": 0, 2025-06-02 01:11:34.950595 | orchestrator | "weight": 0, 2025-06-02 01:11:34.950606 | orchestrator | "crush_location": "{}" 2025-06-02 01:11:34.950617 | orchestrator | }, 2025-06-02 01:11:34.950627 | orchestrator | { 2025-06-02 01:11:34.950638 | orchestrator | "rank": 2, 2025-06-02 01:11:34.950649 | orchestrator | "name": "testbed-node-2", 2025-06-02 01:11:34.950659 | orchestrator | "public_addrs": { 2025-06-02 01:11:34.950670 | orchestrator | "addrvec": [ 2025-06-02 01:11:34.950681 | orchestrator | { 2025-06-02 01:11:34.950692 | orchestrator | "type": "v2", 2025-06-02 01:11:34.950702 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-02 01:11:34.950726 | orchestrator | "nonce": 0 2025-06-02 01:11:34.950737 | orchestrator | }, 2025-06-02 01:11:34.950748 | orchestrator | { 2025-06-02 01:11:34.950758 | orchestrator | "type": "v1", 2025-06-02 01:11:34.950769 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-02 01:11:34.950780 | orchestrator | "nonce": 0 2025-06-02 01:11:34.950812 | orchestrator | } 2025-06-02 01:11:34.950824 | orchestrator | ] 2025-06-02 01:11:34.950835 | orchestrator | }, 2025-06-02 01:11:34.950846 
| orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-02 01:11:34.950856 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-02 01:11:34.950867 | orchestrator | "priority": 0, 2025-06-02 01:11:34.950878 | orchestrator | "weight": 0, 2025-06-02 01:11:34.950889 | orchestrator | "crush_location": "{}" 2025-06-02 01:11:34.950899 | orchestrator | } 2025-06-02 01:11:34.950910 | orchestrator | ] 2025-06-02 01:11:34.950921 | orchestrator | } 2025-06-02 01:11:34.950931 | orchestrator | } 2025-06-02 01:11:34.950942 | orchestrator | 2025-06-02 01:11:34.950954 | orchestrator | # Ceph free space status 2025-06-02 01:11:34.950964 | orchestrator | 2025-06-02 01:11:34.950975 | orchestrator | + echo 2025-06-02 01:11:34.950986 | orchestrator | + echo '# Ceph free space status' 2025-06-02 01:11:34.950997 | orchestrator | + echo 2025-06-02 01:11:34.951008 | orchestrator | + ceph df 2025-06-02 01:11:35.545872 | orchestrator | --- RAW STORAGE --- 2025-06-02 01:11:35.545968 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-02 01:11:35.545998 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-06-02 01:11:35.546013 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-06-02 01:11:35.546073 | orchestrator | 2025-06-02 01:11:35.546082 | orchestrator | --- POOLS --- 2025-06-02 01:11:35.546091 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-02 01:11:35.546100 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-06-02 01:11:35.546108 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-02 01:11:35.546116 | orchestrator | cephfs_metadata 3 32 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-02 01:11:35.546124 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-02 01:11:35.546132 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-02 01:11:35.546140 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-02 01:11:35.546148 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2025-06-02 01:11:35.546156 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-02 01:11:35.546163 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-06-02 01:11:35.546192 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 01:11:35.546200 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 01:11:35.546209 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.92 35 GiB 2025-06-02 01:11:35.546217 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 01:11:35.546225 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 01:11:35.598301 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 01:11:35.644183 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 01:11:35.644253 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-02 01:11:35.644267 | orchestrator | + osism apply facts 2025-06-02 01:11:37.383350 | orchestrator | Registering Redlock._acquired_script 2025-06-02 01:11:37.383420 | orchestrator | Registering Redlock._extend_script 2025-06-02 01:11:37.383426 | orchestrator | Registering Redlock._release_script 2025-06-02 01:11:37.440601 | orchestrator | 2025-06-02 01:11:37 | INFO  | Task be831051-f982-499a-aaf8-e74ca51cc084 (facts) was prepared for execution. 2025-06-02 01:11:37.440696 | orchestrator | 2025-06-02 01:11:37 | INFO  | It takes a moment until task be831051-f982-499a-aaf8-e74ca51cc084 (facts) has been started and output is visible here. 
2025-06-02 01:11:41.541444 | orchestrator | 2025-06-02 01:11:41.541564 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-02 01:11:41.542776 | orchestrator | 2025-06-02 01:11:41.544171 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 01:11:41.545907 | orchestrator | Monday 02 June 2025 01:11:41 +0000 (0:00:00.268) 0:00:00.268 *********** 2025-06-02 01:11:42.636019 | orchestrator | ok: [testbed-manager] 2025-06-02 01:11:42.636123 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:11:42.641482 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:11:42.641537 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:11:42.645542 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:11:42.645639 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:11:42.646498 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:11:42.647744 | orchestrator | 2025-06-02 01:11:42.648195 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 01:11:42.649249 | orchestrator | Monday 02 June 2025 01:11:42 +0000 (0:00:01.093) 0:00:01.362 *********** 2025-06-02 01:11:42.806879 | orchestrator | skipping: [testbed-manager] 2025-06-02 01:11:42.895158 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:11:42.975659 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:11:43.062005 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:11:43.135423 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:11:43.839710 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:11:43.841368 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:11:43.841847 | orchestrator | 2025-06-02 01:11:43.842725 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 01:11:43.843367 | orchestrator | 2025-06-02 01:11:43.843885 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-02 01:11:43.844712 | orchestrator | Monday 02 June 2025 01:11:43 +0000 (0:00:01.205) 0:00:02.568 *********** 2025-06-02 01:11:49.865308 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:11:49.865435 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:11:49.866231 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:11:49.866541 | orchestrator | ok: [testbed-manager] 2025-06-02 01:11:49.867260 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:11:49.868664 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:11:49.869547 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:11:49.870387 | orchestrator | 2025-06-02 01:11:49.871005 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 01:11:49.872125 | orchestrator | 2025-06-02 01:11:49.872367 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 01:11:49.873246 | orchestrator | Monday 02 June 2025 01:11:49 +0000 (0:00:06.028) 0:00:08.596 *********** 2025-06-02 01:11:50.030571 | orchestrator | skipping: [testbed-manager] 2025-06-02 01:11:50.114452 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:11:50.197565 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:11:50.282213 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:11:50.356938 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:11:50.397639 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:11:50.398928 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:11:50.399706 | orchestrator | 2025-06-02 01:11:50.401269 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 01:11:50.401434 | orchestrator | 2025-06-02 01:11:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 01:11:50.401775 | orchestrator | 2025-06-02 01:11:50 | INFO  | Please wait and do not abort execution. 2025-06-02 01:11:50.402714 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 01:11:50.403661 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 01:11:50.404509 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 01:11:50.405283 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 01:11:50.406066 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 01:11:50.406695 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 01:11:50.407605 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 01:11:50.408004 | orchestrator | 2025-06-02 01:11:50.408536 | orchestrator | 2025-06-02 01:11:50.409473 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 01:11:50.409760 | orchestrator | Monday 02 June 2025 01:11:50 +0000 (0:00:00.533) 0:00:09.130 *********** 2025-06-02 01:11:50.410296 | orchestrator | =============================================================================== 2025-06-02 01:11:50.411161 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.03s 2025-06-02 01:11:50.412045 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2025-06-02 01:11:50.412426 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2025-06-02 01:11:50.413993 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-06-02 
01:11:51.113105 | orchestrator | + osism validate ceph-mons 2025-06-02 01:11:52.823311 | orchestrator | Registering Redlock._acquired_script 2025-06-02 01:11:52.823418 | orchestrator | Registering Redlock._extend_script 2025-06-02 01:11:52.823433 | orchestrator | Registering Redlock._release_script 2025-06-02 01:12:10.580450 | orchestrator | 2025-06-02 01:12:10.580568 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-02 01:12:10.580586 | orchestrator | 2025-06-02 01:12:10.580599 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-02 01:12:10.580610 | orchestrator | Monday 02 June 2025 01:11:56 +0000 (0:00:00.328) 0:00:00.328 *********** 2025-06-02 01:12:10.580622 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 01:12:10.580633 | orchestrator | 2025-06-02 01:12:10.580644 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-02 01:12:10.580655 | orchestrator | Monday 02 June 2025 01:11:57 +0000 (0:00:00.553) 0:00:00.882 *********** 2025-06-02 01:12:10.580666 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 01:12:10.580676 | orchestrator | 2025-06-02 01:12:10.580728 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-02 01:12:10.580767 | orchestrator | Monday 02 June 2025 01:11:57 +0000 (0:00:00.622) 0:00:01.504 *********** 2025-06-02 01:12:10.580779 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.580791 | orchestrator | 2025-06-02 01:12:10.580856 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-02 01:12:10.580869 | orchestrator | Monday 02 June 2025 01:11:58 +0000 (0:00:00.203) 0:00:01.708 *********** 2025-06-02 01:12:10.580880 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.580890 | orchestrator | ok: 
[testbed-node-1] 2025-06-02 01:12:10.580902 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:12:10.580912 | orchestrator | 2025-06-02 01:12:10.580923 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-02 01:12:10.580933 | orchestrator | Monday 02 June 2025 01:11:58 +0000 (0:00:00.267) 0:00:01.976 *********** 2025-06-02 01:12:10.580944 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:12:10.580954 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:12:10.580965 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.580976 | orchestrator | 2025-06-02 01:12:10.580988 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-02 01:12:10.581001 | orchestrator | Monday 02 June 2025 01:11:59 +0000 (0:00:00.911) 0:00:02.887 *********** 2025-06-02 01:12:10.581014 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.581028 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:12:10.581041 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:12:10.581053 | orchestrator | 2025-06-02 01:12:10.581066 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-02 01:12:10.581079 | orchestrator | Monday 02 June 2025 01:11:59 +0000 (0:00:00.262) 0:00:03.149 *********** 2025-06-02 01:12:10.581091 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.581104 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:12:10.581116 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:12:10.581129 | orchestrator | 2025-06-02 01:12:10.581142 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 01:12:10.581169 | orchestrator | Monday 02 June 2025 01:11:59 +0000 (0:00:00.415) 0:00:03.564 *********** 2025-06-02 01:12:10.581182 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.581194 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:12:10.581207 | 
orchestrator | ok: [testbed-node-2] 2025-06-02 01:12:10.581220 | orchestrator | 2025-06-02 01:12:10.581233 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-02 01:12:10.581247 | orchestrator | Monday 02 June 2025 01:12:00 +0000 (0:00:00.283) 0:00:03.848 *********** 2025-06-02 01:12:10.581258 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.581269 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:12:10.581279 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:12:10.581290 | orchestrator | 2025-06-02 01:12:10.581300 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-02 01:12:10.581311 | orchestrator | Monday 02 June 2025 01:12:00 +0000 (0:00:00.271) 0:00:04.119 *********** 2025-06-02 01:12:10.581322 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.581333 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:12:10.581343 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:12:10.581354 | orchestrator | 2025-06-02 01:12:10.581364 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 01:12:10.581375 | orchestrator | Monday 02 June 2025 01:12:00 +0000 (0:00:00.282) 0:00:04.401 *********** 2025-06-02 01:12:10.581385 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.581396 | orchestrator | 2025-06-02 01:12:10.581406 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 01:12:10.581417 | orchestrator | Monday 02 June 2025 01:12:01 +0000 (0:00:00.477) 0:00:04.878 *********** 2025-06-02 01:12:10.581428 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.581438 | orchestrator | 2025-06-02 01:12:10.581449 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 01:12:10.581468 | orchestrator | Monday 02 June 2025 01:12:01 +0000 (0:00:00.213) 
0:00:05.092 *********** 2025-06-02 01:12:10.581479 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.581489 | orchestrator | 2025-06-02 01:12:10.581500 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:12:10.581511 | orchestrator | Monday 02 June 2025 01:12:01 +0000 (0:00:00.218) 0:00:05.311 *********** 2025-06-02 01:12:10.581522 | orchestrator | 2025-06-02 01:12:10.581532 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:12:10.581543 | orchestrator | Monday 02 June 2025 01:12:01 +0000 (0:00:00.062) 0:00:05.373 *********** 2025-06-02 01:12:10.581554 | orchestrator | 2025-06-02 01:12:10.581564 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:12:10.581575 | orchestrator | Monday 02 June 2025 01:12:01 +0000 (0:00:00.063) 0:00:05.436 *********** 2025-06-02 01:12:10.581585 | orchestrator | 2025-06-02 01:12:10.581596 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 01:12:10.581607 | orchestrator | Monday 02 June 2025 01:12:01 +0000 (0:00:00.066) 0:00:05.502 *********** 2025-06-02 01:12:10.581617 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.581628 | orchestrator | 2025-06-02 01:12:10.581639 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-02 01:12:10.581649 | orchestrator | Monday 02 June 2025 01:12:02 +0000 (0:00:00.224) 0:00:05.727 *********** 2025-06-02 01:12:10.581660 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.581671 | orchestrator | 2025-06-02 01:12:10.581700 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-02 01:12:10.581712 | orchestrator | Monday 02 June 2025 01:12:02 +0000 (0:00:00.224) 0:00:05.951 *********** 2025-06-02 01:12:10.581723 | orchestrator | 
ok: [testbed-node-0] 2025-06-02 01:12:10.581733 | orchestrator | 2025-06-02 01:12:10.581744 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-02 01:12:10.581755 | orchestrator | Monday 02 June 2025 01:12:02 +0000 (0:00:00.101) 0:00:06.052 *********** 2025-06-02 01:12:10.581766 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:12:10.581777 | orchestrator | 2025-06-02 01:12:10.581787 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-02 01:12:10.581798 | orchestrator | Monday 02 June 2025 01:12:03 +0000 (0:00:01.450) 0:00:07.503 *********** 2025-06-02 01:12:10.581828 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.581839 | orchestrator | 2025-06-02 01:12:10.581849 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-02 01:12:10.581860 | orchestrator | Monday 02 June 2025 01:12:04 +0000 (0:00:00.328) 0:00:07.832 *********** 2025-06-02 01:12:10.581870 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.581881 | orchestrator | 2025-06-02 01:12:10.581892 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-02 01:12:10.581902 | orchestrator | Monday 02 June 2025 01:12:04 +0000 (0:00:00.313) 0:00:08.145 *********** 2025-06-02 01:12:10.581913 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.581923 | orchestrator | 2025-06-02 01:12:10.581934 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-02 01:12:10.581945 | orchestrator | Monday 02 June 2025 01:12:04 +0000 (0:00:00.299) 0:00:08.445 *********** 2025-06-02 01:12:10.581955 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.581966 | orchestrator | 2025-06-02 01:12:10.581977 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-02 01:12:10.581987 | orchestrator | 
Monday 02 June 2025 01:12:05 +0000 (0:00:00.319) 0:00:08.764 *********** 2025-06-02 01:12:10.581997 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.582008 | orchestrator | 2025-06-02 01:12:10.582072 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-02 01:12:10.582086 | orchestrator | Monday 02 June 2025 01:12:05 +0000 (0:00:00.115) 0:00:08.879 *********** 2025-06-02 01:12:10.582096 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.582107 | orchestrator | 2025-06-02 01:12:10.582125 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-02 01:12:10.582136 | orchestrator | Monday 02 June 2025 01:12:05 +0000 (0:00:00.159) 0:00:09.038 *********** 2025-06-02 01:12:10.582181 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.582193 | orchestrator | 2025-06-02 01:12:10.582204 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-02 01:12:10.582215 | orchestrator | Monday 02 June 2025 01:12:05 +0000 (0:00:00.108) 0:00:09.147 *********** 2025-06-02 01:12:10.582226 | orchestrator | changed: [testbed-node-0] 2025-06-02 01:12:10.582236 | orchestrator | 2025-06-02 01:12:10.582247 | orchestrator | TASK [Set health test data] **************************************************** 2025-06-02 01:12:10.582258 | orchestrator | Monday 02 June 2025 01:12:06 +0000 (0:00:01.326) 0:00:10.474 *********** 2025-06-02 01:12:10.582268 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.582279 | orchestrator | 2025-06-02 01:12:10.582289 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-06-02 01:12:10.582300 | orchestrator | Monday 02 June 2025 01:12:07 +0000 (0:00:00.299) 0:00:10.773 *********** 2025-06-02 01:12:10.582311 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.582321 | orchestrator | 2025-06-02 01:12:10.582332 | orchestrator | 
TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-02 01:12:10.582342 | orchestrator | Monday 02 June 2025 01:12:07 +0000 (0:00:00.136) 0:00:10.910 *********** 2025-06-02 01:12:10.582353 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:10.582364 | orchestrator | 2025-06-02 01:12:10.582374 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-02 01:12:10.582385 | orchestrator | Monday 02 June 2025 01:12:07 +0000 (0:00:00.151) 0:00:11.061 *********** 2025-06-02 01:12:10.582395 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.582406 | orchestrator | 2025-06-02 01:12:10.582417 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-02 01:12:10.582427 | orchestrator | Monday 02 June 2025 01:12:07 +0000 (0:00:00.134) 0:00:11.196 *********** 2025-06-02 01:12:10.582438 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.582448 | orchestrator | 2025-06-02 01:12:10.582459 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 01:12:10.582469 | orchestrator | Monday 02 June 2025 01:12:07 +0000 (0:00:00.281) 0:00:11.478 *********** 2025-06-02 01:12:10.582480 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 01:12:10.582491 | orchestrator | 2025-06-02 01:12:10.582501 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 01:12:10.582512 | orchestrator | Monday 02 June 2025 01:12:08 +0000 (0:00:00.276) 0:00:11.754 *********** 2025-06-02 01:12:10.582522 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:10.582533 | orchestrator | 2025-06-02 01:12:10.582543 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 01:12:10.582554 | orchestrator | Monday 02 June 2025 01:12:08 +0000 (0:00:00.228) 0:00:11.983 
*********** 2025-06-02 01:12:10.582564 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 01:12:10.582575 | orchestrator | 2025-06-02 01:12:10.582586 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 01:12:10.582596 | orchestrator | Monday 02 June 2025 01:12:09 +0000 (0:00:01.514) 0:00:13.498 *********** 2025-06-02 01:12:10.582607 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 01:12:10.582617 | orchestrator | 2025-06-02 01:12:10.582628 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 01:12:10.582638 | orchestrator | Monday 02 June 2025 01:12:10 +0000 (0:00:00.237) 0:00:13.735 *********** 2025-06-02 01:12:10.582649 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 01:12:10.582659 | orchestrator | 2025-06-02 01:12:10.582678 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:12:13.109216 | orchestrator | Monday 02 June 2025 01:12:10 +0000 (0:00:00.245) 0:00:13.981 *********** 2025-06-02 01:12:13.109331 | orchestrator | 2025-06-02 01:12:13.109344 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:12:13.109353 | orchestrator | Monday 02 June 2025 01:12:10 +0000 (0:00:00.067) 0:00:14.048 *********** 2025-06-02 01:12:13.109362 | orchestrator | 2025-06-02 01:12:13.109370 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:12:13.109377 | orchestrator | Monday 02 June 2025 01:12:10 +0000 (0:00:00.066) 0:00:14.114 *********** 2025-06-02 01:12:13.109385 | orchestrator | 2025-06-02 01:12:13.109396 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 01:12:13.109404 | orchestrator | Monday 02 June 2025 01:12:10 +0000 
(0:00:00.070) 0:00:14.185 *********** 2025-06-02 01:12:13.109412 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 01:12:13.109420 | orchestrator | 2025-06-02 01:12:13.109428 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 01:12:13.109436 | orchestrator | Monday 02 June 2025 01:12:12 +0000 (0:00:01.533) 0:00:15.719 *********** 2025-06-02 01:12:13.109444 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-02 01:12:13.109452 | orchestrator |  "msg": [ 2025-06-02 01:12:13.109460 | orchestrator |  "Validator run completed.", 2025-06-02 01:12:13.109469 | orchestrator |  "You can find the report file here:", 2025-06-02 01:12:13.109477 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-02T01:11:57+00:00-report.json", 2025-06-02 01:12:13.109485 | orchestrator |  "on the following host:", 2025-06-02 01:12:13.109493 | orchestrator |  "testbed-manager" 2025-06-02 01:12:13.109501 | orchestrator |  ] 2025-06-02 01:12:13.109509 | orchestrator | } 2025-06-02 01:12:13.109517 | orchestrator | 2025-06-02 01:12:13.109525 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 01:12:13.109534 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-06-02 01:12:13.109557 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 01:12:13.109566 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 01:12:13.109574 | orchestrator | 2025-06-02 01:12:13.109582 | orchestrator | 2025-06-02 01:12:13.109594 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 01:12:13.109602 | orchestrator | Monday 02 June 2025 01:12:12 +0000 (0:00:00.622) 0:00:16.341 *********** 
2025-06-02 01:12:13.109610 | orchestrator | =============================================================================== 2025-06-02 01:12:13.109617 | orchestrator | Write report file ------------------------------------------------------- 1.53s 2025-06-02 01:12:13.109625 | orchestrator | Aggregate test results step one ----------------------------------------- 1.51s 2025-06-02 01:12:13.109633 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.45s 2025-06-02 01:12:13.109641 | orchestrator | Gather status data ------------------------------------------------------ 1.33s 2025-06-02 01:12:13.109648 | orchestrator | Get container info ------------------------------------------------------ 0.91s 2025-06-02 01:12:13.109656 | orchestrator | Print report file information ------------------------------------------- 0.62s 2025-06-02 01:12:13.109664 | orchestrator | Create report output directory ------------------------------------------ 0.62s 2025-06-02 01:12:13.109672 | orchestrator | Get timestamp for report file ------------------------------------------- 0.55s 2025-06-02 01:12:13.109680 | orchestrator | Aggregate test results step one ----------------------------------------- 0.48s 2025-06-02 01:12:13.109687 | orchestrator | Set test result to passed if container is existing ---------------------- 0.42s 2025-06-02 01:12:13.109695 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s 2025-06-02 01:12:13.109709 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2025-06-02 01:12:13.109717 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.31s 2025-06-02 01:12:13.109724 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.30s 2025-06-02 01:12:13.109732 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2025-06-02 
01:12:13.109740 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s 2025-06-02 01:12:13.109748 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.28s 2025-06-02 01:12:13.109756 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.28s 2025-06-02 01:12:13.109763 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s 2025-06-02 01:12:13.109774 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.27s 2025-06-02 01:12:13.334294 | orchestrator | + osism validate ceph-mgrs 2025-06-02 01:12:15.025860 | orchestrator | Registering Redlock._acquired_script 2025-06-02 01:12:15.025961 | orchestrator | Registering Redlock._extend_script 2025-06-02 01:12:15.025977 | orchestrator | Registering Redlock._release_script 2025-06-02 01:12:33.849362 | orchestrator | 2025-06-02 01:12:33.849455 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-06-02 01:12:33.849467 | orchestrator | 2025-06-02 01:12:33.849474 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-02 01:12:33.849480 | orchestrator | Monday 02 June 2025 01:12:19 +0000 (0:00:00.419) 0:00:00.419 *********** 2025-06-02 01:12:33.849488 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 01:12:33.849494 | orchestrator | 2025-06-02 01:12:33.849500 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-02 01:12:33.849506 | orchestrator | Monday 02 June 2025 01:12:20 +0000 (0:00:00.591) 0:00:01.011 *********** 2025-06-02 01:12:33.849512 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 01:12:33.849518 | orchestrator | 2025-06-02 01:12:33.849523 | orchestrator | TASK [Define report vars] 
****************************************************** 2025-06-02 01:12:33.849529 | orchestrator | Monday 02 June 2025 01:12:20 +0000 (0:00:00.888) 0:00:01.899 *********** 2025-06-02 01:12:33.849535 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:33.849542 | orchestrator | 2025-06-02 01:12:33.849549 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-02 01:12:33.849555 | orchestrator | Monday 02 June 2025 01:12:21 +0000 (0:00:00.236) 0:00:02.136 *********** 2025-06-02 01:12:33.849560 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:33.849566 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:12:33.849572 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:12:33.849578 | orchestrator | 2025-06-02 01:12:33.849584 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-02 01:12:33.849590 | orchestrator | Monday 02 June 2025 01:12:21 +0000 (0:00:00.288) 0:00:02.424 *********** 2025-06-02 01:12:33.849596 | orchestrator | ok: [testbed-node-0] 2025-06-02 01:12:33.849602 | orchestrator | ok: [testbed-node-1] 2025-06-02 01:12:33.849608 | orchestrator | ok: [testbed-node-2] 2025-06-02 01:12:33.849614 | orchestrator | 2025-06-02 01:12:33.849620 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-02 01:12:33.849627 | orchestrator | Monday 02 June 2025 01:12:22 +0000 (0:00:00.969) 0:00:03.394 *********** 2025-06-02 01:12:33.849633 | orchestrator | skipping: [testbed-node-0] 2025-06-02 01:12:33.849641 | orchestrator | skipping: [testbed-node-1] 2025-06-02 01:12:33.849646 | orchestrator | skipping: [testbed-node-2] 2025-06-02 01:12:33.849653 | orchestrator | 2025-06-02 01:12:33.849659 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-02 01:12:33.849665 | orchestrator | Monday 02 June 2025 01:12:22 +0000 (0:00:00.306) 0:00:03.700 *********** 2025-06-02 
01:12:33.849670 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:12:33.849676 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:12:33.849682 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:12:33.849707 | orchestrator |
2025-06-02 01:12:33.849713 | orchestrator | TASK [Prepare test data] ******************************************************
2025-06-02 01:12:33.849719 | orchestrator | Monday 02 June 2025 01:12:23 +0000 (0:00:00.558) 0:00:04.259 ***********
2025-06-02 01:12:33.849725 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:12:33.849730 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:12:33.849736 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:12:33.849742 | orchestrator |
2025-06-02 01:12:33.849748 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-06-02 01:12:33.849765 | orchestrator | Monday 02 June 2025 01:12:23 +0000 (0:00:00.315) 0:00:04.574 ***********
2025-06-02 01:12:33.849771 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:12:33.849777 | orchestrator | skipping: [testbed-node-1]
2025-06-02 01:12:33.849782 | orchestrator | skipping: [testbed-node-2]
2025-06-02 01:12:33.849788 | orchestrator |
2025-06-02 01:12:33.849794 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-06-02 01:12:33.849800 | orchestrator | Monday 02 June 2025 01:12:23 +0000 (0:00:00.311) 0:00:04.886 ***********
2025-06-02 01:12:33.849806 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:12:33.849865 | orchestrator | ok: [testbed-node-1]
2025-06-02 01:12:33.849870 | orchestrator | ok: [testbed-node-2]
2025-06-02 01:12:33.849877 | orchestrator |
2025-06-02 01:12:33.849882 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 01:12:33.849889 | orchestrator | Monday 02 June 2025 01:12:24 +0000 (0:00:00.309) 0:00:05.195 ***********
2025-06-02 01:12:33.849895 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:12:33.849900 | orchestrator |
2025-06-02 01:12:33.849906 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 01:12:33.849913 | orchestrator | Monday 02 June 2025 01:12:24 +0000 (0:00:00.695) 0:00:05.891 ***********
2025-06-02 01:12:33.849919 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:12:33.849925 | orchestrator |
2025-06-02 01:12:33.849930 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 01:12:33.849937 | orchestrator | Monday 02 June 2025 01:12:25 +0000 (0:00:00.239) 0:00:06.130 ***********
2025-06-02 01:12:33.849944 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:12:33.849950 | orchestrator |
2025-06-02 01:12:33.849956 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 01:12:33.849962 | orchestrator | Monday 02 June 2025 01:12:25 +0000 (0:00:00.073) 0:00:06.385 ***********
2025-06-02 01:12:33.849968 | orchestrator |
2025-06-02 01:12:33.849974 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 01:12:33.849980 | orchestrator | Monday 02 June 2025 01:12:25 +0000 (0:00:00.074) 0:00:06.459 ***********
2025-06-02 01:12:33.849986 | orchestrator |
2025-06-02 01:12:33.849992 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 01:12:33.849998 | orchestrator | Monday 02 June 2025 01:12:25 +0000 (0:00:00.073) 0:00:06.534 ***********
2025-06-02 01:12:33.850004 | orchestrator |
2025-06-02 01:12:33.850009 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 01:12:33.850061 | orchestrator | Monday 02 June 2025 01:12:25 +0000 (0:00:00.073) 0:00:06.607 ***********
2025-06-02 01:12:33.850068 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:12:33.850075 | orchestrator |
2025-06-02 01:12:33.850082 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-02 01:12:33.850088 | orchestrator | Monday 02 June 2025 01:12:25 +0000 (0:00:00.237) 0:00:06.844 ***********
2025-06-02 01:12:33.850095 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:12:33.850101 | orchestrator |
2025-06-02 01:12:33.850124 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-06-02 01:12:33.850133 | orchestrator | Monday 02 June 2025 01:12:26 +0000 (0:00:00.244) 0:00:07.089 ***********
2025-06-02 01:12:33.850138 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:12:33.850144 | orchestrator |
2025-06-02 01:12:33.850151 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-06-02 01:12:33.850165 | orchestrator | Monday 02 June 2025 01:12:26 +0000 (0:00:00.109) 0:00:07.199 ***********
2025-06-02 01:12:33.850172 | orchestrator | changed: [testbed-node-0]
2025-06-02 01:12:33.850178 | orchestrator |
2025-06-02 01:12:33.850184 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-06-02 01:12:33.850191 | orchestrator | Monday 02 June 2025 01:12:28 +0000 (0:00:01.825) 0:00:09.024 ***********
2025-06-02 01:12:33.850197 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:12:33.850203 | orchestrator |
2025-06-02 01:12:33.850209 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-06-02 01:12:33.850215 | orchestrator | Monday 02 June 2025 01:12:28 +0000 (0:00:00.260) 0:00:09.285 ***********
2025-06-02 01:12:33.850220 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:12:33.850226 | orchestrator |
2025-06-02 01:12:33.850232 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-06-02 01:12:33.850238 | orchestrator | Monday 02 June 2025 01:12:29 +0000 (0:00:00.720) 0:00:10.006 ***********
2025-06-02 01:12:33.850243 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:12:33.850250 | orchestrator |
2025-06-02 01:12:33.850255 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-06-02 01:12:33.850261 | orchestrator | Monday 02 June 2025 01:12:29 +0000 (0:00:00.138) 0:00:10.145 ***********
2025-06-02 01:12:33.850268 | orchestrator | ok: [testbed-node-0]
2025-06-02 01:12:33.850273 | orchestrator |
2025-06-02 01:12:33.850279 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-02 01:12:33.850285 | orchestrator | Monday 02 June 2025 01:12:29 +0000 (0:00:00.167) 0:00:10.312 ***********
2025-06-02 01:12:33.850291 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 01:12:33.850297 | orchestrator |
2025-06-02 01:12:33.850302 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-02 01:12:33.850308 | orchestrator | Monday 02 June 2025 01:12:29 +0000 (0:00:00.241) 0:00:10.554 ***********
2025-06-02 01:12:33.850315 | orchestrator | skipping: [testbed-node-0]
2025-06-02 01:12:33.850320 | orchestrator |
2025-06-02 01:12:33.850326 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 01:12:33.850332 | orchestrator | Monday 02 June 2025 01:12:29 +0000 (0:00:00.229) 0:00:10.783 ***********
2025-06-02 01:12:33.850337 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 01:12:33.850343 | orchestrator |
2025-06-02 01:12:33.850349 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 01:12:33.850355 | orchestrator | Monday 02 June 2025 01:12:31 +0000 (0:00:01.218) 0:00:12.001 ***********
2025-06-02 01:12:33.850360 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 01:12:33.850366 | orchestrator |
2025-06-02 01:12:33.850372 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 01:12:33.850378 | orchestrator | Monday 02 June 2025 01:12:31 +0000 (0:00:00.231) 0:00:12.233 ***********
2025-06-02 01:12:33.850383 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 01:12:33.850389 | orchestrator |
2025-06-02 01:12:33.850394 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 01:12:33.850400 | orchestrator | Monday 02 June 2025 01:12:31 +0000 (0:00:00.068) 0:00:12.468 ***********
2025-06-02 01:12:33.850405 | orchestrator |
2025-06-02 01:12:33.850411 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 01:12:33.850417 | orchestrator | Monday 02 June 2025 01:12:31 +0000 (0:00:00.081) 0:00:12.536 ***********
2025-06-02 01:12:33.850422 | orchestrator |
2025-06-02 01:12:33.850428 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 01:12:33.850433 | orchestrator | Monday 02 June 2025 01:12:31 +0000 (0:00:00.081) 0:00:12.617 ***********
2025-06-02 01:12:33.850439 | orchestrator |
2025-06-02 01:12:33.850444 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-02 01:12:33.850450 | orchestrator | Monday 02 June 2025 01:12:31 +0000 (0:00:00.070) 0:00:12.687 ***********
2025-06-02 01:12:33.850461 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 01:12:33.850468 | orchestrator |
2025-06-02 01:12:33.850474 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 01:12:33.850480 | orchestrator | Monday 02 June 2025 01:12:33 +0000 (0:00:01.727) 0:00:14.415 ***********
2025-06-02 01:12:33.850485 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-02 01:12:33.850492 | orchestrator |  "msg": [
2025-06-02 01:12:33.850499 | orchestrator |  "Validator run completed.",
2025-06-02 01:12:33.850506 | orchestrator |  "You can find the report file here:",
2025-06-02 01:12:33.850514 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-02T01:12:19+00:00-report.json",
2025-06-02 01:12:33.850521 | orchestrator |  "on the following host:",
2025-06-02 01:12:33.850526 | orchestrator |  "testbed-manager"
2025-06-02 01:12:33.850533 | orchestrator |  ]
2025-06-02 01:12:33.850539 | orchestrator | }
2025-06-02 01:12:33.850546 | orchestrator |
2025-06-02 01:12:33.850552 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 01:12:33.850559 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-02 01:12:33.850567 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 01:12:33.850580 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 01:12:34.124361 | orchestrator |
2025-06-02 01:12:34.124465 | orchestrator |
2025-06-02 01:12:34.124486 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 01:12:34.124508 | orchestrator | Monday 02 June 2025 01:12:33 +0000 (0:00:00.394) 0:00:14.809 ***********
2025-06-02 01:12:34.124527 | orchestrator | ===============================================================================
2025-06-02 01:12:34.124544 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.83s
2025-06-02 01:12:34.124561 | orchestrator | Write report file ------------------------------------------------------- 1.73s
2025-06-02 01:12:34.124579 | orchestrator | Aggregate test results step one ----------------------------------------- 1.22s
2025-06-02 01:12:34.124598 | orchestrator | Get container info ------------------------------------------------------ 0.97s
2025-06-02 01:12:34.124616 | orchestrator | Create report output directory ------------------------------------------ 0.89s
2025-06-02 01:12:34.124634 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.72s
2025-06-02 01:12:34.124653 | orchestrator | Aggregate test results step one ----------------------------------------- 0.70s
2025-06-02 01:12:34.124665 | orchestrator | Get timestamp for report file ------------------------------------------- 0.59s
2025-06-02 01:12:34.124676 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s
2025-06-02 01:12:34.124709 | orchestrator | Print report file information ------------------------------------------- 0.39s
2025-06-02 01:12:34.124721 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-06-02 01:12:34.124732 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s
2025-06-02 01:12:34.124743 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s
2025-06-02 01:12:34.124753 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2025-06-02 01:12:34.124764 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s
2025-06-02 01:12:34.124774 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s
2025-06-02 01:12:34.124785 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2025-06-02 01:12:34.124795 | orchestrator | Fail due to missing containers ------------------------------------------ 0.25s
2025-06-02 01:12:34.124912 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.24s
2025-06-02 01:12:34.124928 | orchestrator | Aggregate test results step two ----------------------------------------- 0.24s
2025-06-02 01:12:34.482238 | orchestrator | + osism validate ceph-osds
2025-06-02 01:12:36.194798 | orchestrator | Registering Redlock._acquired_script
2025-06-02 01:12:36.194954 | orchestrator | Registering Redlock._extend_script
2025-06-02 01:12:36.194970 | orchestrator | Registering Redlock._release_script
2025-06-02 01:12:43.716244 | orchestrator |
2025-06-02 01:12:43.716345 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-06-02 01:12:43.716363 | orchestrator |
2025-06-02 01:12:43.716376 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-02 01:12:43.716402 | orchestrator | Monday 02 June 2025 01:12:40 +0000 (0:00:00.315) 0:00:00.315 ***********
2025-06-02 01:12:43.716414 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 01:12:43.716425 | orchestrator |
2025-06-02 01:12:43.716436 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 01:12:43.716447 | orchestrator | Monday 02 June 2025 01:12:40 +0000 (0:00:00.512) 0:00:00.828 ***********
2025-06-02 01:12:43.716458 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 01:12:43.716470 | orchestrator |
2025-06-02 01:12:43.716481 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-02 01:12:43.716492 | orchestrator | Monday 02 June 2025 01:12:41 +0000 (0:00:00.315) 0:00:01.143 ***********
2025-06-02 01:12:43.716503 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 01:12:43.716514 | orchestrator |
2025-06-02 01:12:43.716525 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-02 01:12:43.716536 | orchestrator | Monday 02 June 2025 01:12:41 +0000 (0:00:00.757) 0:00:01.901 ***********
2025-06-02 01:12:43.716547 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:12:43.716559 | orchestrator |
2025-06-02 01:12:43.716570 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-02 01:12:43.716581 | orchestrator | Monday 02 June 2025 01:12:41 +0000 (0:00:00.106) 0:00:02.008 ***********
2025-06-02 01:12:43.716592 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:12:43.716603 | orchestrator |
2025-06-02 01:12:43.716614 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-02 01:12:43.716625 | orchestrator | Monday 02 June 2025 01:12:42 +0000 (0:00:00.119) 0:00:02.127 ***********
2025-06-02 01:12:43.716636 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:12:43.716647 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:12:43.716658 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:12:43.716669 | orchestrator |
2025-06-02 01:12:43.716679 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-02 01:12:43.716690 | orchestrator | Monday 02 June 2025 01:12:42 +0000 (0:00:00.248) 0:00:02.375 ***********
2025-06-02 01:12:43.716701 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:12:43.716712 | orchestrator |
2025-06-02 01:12:43.716723 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-02 01:12:43.716733 | orchestrator | Monday 02 June 2025 01:12:42 +0000 (0:00:00.253) 0:00:02.505 ***********
2025-06-02 01:12:43.716744 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:12:43.716755 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:12:43.716766 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:12:43.716777 | orchestrator |
2025-06-02 01:12:43.716788 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-06-02 01:12:43.716801 | orchestrator | Monday 02 June 2025 01:12:42 +0000 (0:00:00.432) 0:00:02.758 ***********
2025-06-02 01:12:43.716844 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:12:43.716857 | orchestrator |
2025-06-02 01:12:43.716871 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 01:12:43.716883 | orchestrator | Monday 02 June 2025 01:12:43 +0000 (0:00:00.371) 0:00:03.191 ***********
2025-06-02 01:12:43.716895 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:12:43.716928 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:12:43.716941 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:12:43.716954 | orchestrator |
2025-06-02 01:12:43.716967 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-06-02 01:12:43.716981 | orchestrator | Monday 02 June 2025 01:12:43 +0000 (0:00:00.371) 0:00:03.563 ***********
2025-06-02 01:12:43.716996 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5e610eefbe461c697bc34d861554ba55198389b3c8191ef64b6c98049f91342d', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-02 01:12:43.717011 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8edd9b3922226e622dbfcd3dd0236245f14d55a5b2ef9b3e447065fcf1c056bb', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-02 01:12:43.717023 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4b7a0d6032e514097ae52ff3d2cacf435b547f412c14ceb5278c2cc9d9262004', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 01:12:43.717037 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f67d7f3b5a5805bf3071009301874f034cef8d3d4189cc9f3a384bab6458dc85', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 01:12:43.717052 | orchestrator | skipping: [testbed-node-3] => (item={'id': '45814883b6380f0c47e35cccbaa26a592f289aa21781d335ccbefb187d933143', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-02 01:12:43.717082 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7a4d28c2aec92d7d60d58a6ae8da198b1331eb4b24474613174f9a99d81eb316', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-02 01:12:43.717101 | orchestrator | skipping: [testbed-node-3] => (item={'id': '85f6fbf02a05e5fb9a1a348d31be1d6835d347eff900ac9049a6031f57c0063c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 01:12:43.717123 | orchestrator | skipping: [testbed-node-3] => (item={'id': '84a79a6c3b705fc0935688f893d83634e4f8e9a42386720edd8d0bd1923c79ff', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 01:12:43.717138 | orchestrator | skipping: [testbed-node-3] => (item={'id': '25c9d0446eac2d1b7208ca50c03707a1ff38c59eab65d9e36527e062049d0796', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-02 01:12:43.717152 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f0dcab59602f494a5b2e3426db71082abd54b500ce85a90c2571b688a846fad8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 19 minutes'})
2025-06-02 01:12:43.717164 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0d64044fadd590d736c3e99c3d266dbb4890a6b37419a0d6c47135cc26eb1347', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 20 minutes'})
2025-06-02 01:12:43.717175 | orchestrator | skipping: [testbed-node-3] => (item={'id': '53cdcd66ff59da8981ad9bf90456c5ef6aab8b586699322ffbd67931a2d58481', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-02 01:12:43.717195 | orchestrator | ok: [testbed-node-3] => (item={'id': '4702e1544df80302ad0e3c47b584943ae1ecabf423c93dd21b3ff611d9c8a438', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 01:12:43.717206 | orchestrator | ok: [testbed-node-3] => (item={'id': '5c3252f1f0d3e4189dbddb4724638f80eddb8785c14ac043a9de1736e8f90e0f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 01:12:43.717217 | orchestrator | skipping: [testbed-node-3] => (item={'id': '907ea0384df21721029d3c52f729d015a8a70290c13e05314191584333a6e8c3', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})
2025-06-02 01:12:43.717229 | orchestrator | skipping: [testbed-node-3] => (item={'id': '07067bae63b15b245f3c71c3b0e2c4552c11ac0e4d3257356175ba189b844601', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})
2025-06-02 01:12:43.717240 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e531072102d2513fcc2ef2174a8caffaa7625c18a5da5f6a05b1c2f3457ec72a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2025-06-02 01:12:43.717251 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7f02770a283d4060831270f8b87239a4926c45602ca0c4bb2fe70a62c9ffe15d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 27 minutes'})
2025-06-02 01:12:43.717262 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f4b6de7bc2b9102cc01708c9128db88d0f705a79798e73a67d6319a2193e908e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 27 minutes'})
2025-06-02 01:12:43.717273 | orchestrator | skipping: [testbed-node-3] => (item={'id': '118c83347787ad30d41e7bd1ad8ec825ee9aa8c9ef5e04a12eb2fc518f242304', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-02 01:12:43.717290 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b2bf5b39123a5d3b7c3bd4458492c27bfc3957a642fd64fffdadfdef7bf76b14', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-02 01:12:43.982602 | orchestrator | skipping: [testbed-node-4] => (item={'id': '802fd1593a73b9d33c42b63b7561a3971af34000f7f53c22eb31e0b9cfbb00e3', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-02 01:12:43.982711 | orchestrator | skipping: [testbed-node-4] => (item={'id': '352fa8123cf062898bc14681b568417d0b4d1225d45f548df011a2a0c45913bf', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 01:12:43.982737 | orchestrator | skipping: [testbed-node-4] => (item={'id': '265fb21107fa12d1434d3ca3eeb05d7ee25fe90b105e079d616f34ac9d632655', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 01:12:43.982758 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5beb043d64e7028533f962dd77d7387ccd8e272179b617ecbaa7cd0c0c147839', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-02 01:12:43.982778 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bd38007309a3b99e77b3474243b288eb2ff4eda53e6ee4aef167f76bc11b0358', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-02 01:12:43.982895 | orchestrator | skipping: [testbed-node-4] => (item={'id': '59676519633bdfd56abf5b5397d9ff1df9dacaf8d3978fa0e73df1fc77668f49', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 01:12:43.982924 | orchestrator | skipping: [testbed-node-4] => (item={'id': '980698d5d8dfdf0da18fadcdcaf7caef72d76b13516462e67ac81ba04d6786ba', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 01:12:43.982944 | orchestrator | skipping: [testbed-node-4] => (item={'id': '326cc73b02a03d3b3b5c8068f3572065ecd6130fe86782eb16da8724f8c72fe7', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-02 01:12:43.982966 | orchestrator | skipping: [testbed-node-4] => (item={'id': '71e0392e79337d6eabd780a923170915b2c7d58c12eddfc03590d90933c3d283', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 19 minutes'})
2025-06-02 01:12:43.982988 | orchestrator | skipping: [testbed-node-4] => (item={'id': '46b623feca113afabf29821cb4d3bc9458b6e8eef8044d6034439b45de3e650b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 20 minutes'})
2025-06-02 01:12:43.983008 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd56bcf9a29a0158a4c5a8400d80b0ae6278ed7a8a84d06977bbcaa4f80d5dde', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-02 01:12:43.983030 | orchestrator | ok: [testbed-node-4] => (item={'id': 'fd90cedc8658e0a6ffe46294a9dc04bb0659e34844496fb66a189f6eee1b4b38', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 01:12:43.983070 | orchestrator | ok: [testbed-node-4] => (item={'id': '08f6c165db7e35f9a71eae0a44c8279d211c65004d258922abad84fa95e93161', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 01:12:43.983091 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b7dc7249128e3b628866fe5e110f701d0ab1633a4e0cc443dee50b53238a9be2', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})
2025-06-02 01:12:43.983144 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9881b4145c24353cab7ba681b233cf7487f60197b7db249d515828885fb63fc4', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})
2025-06-02 01:12:43.983168 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1ded825ed2878ab71cce7d4a77476882117c5bd9c12554235f1337956db39b3d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2025-06-02 01:12:43.983192 | orchestrator | skipping: [testbed-node-4] => (item={'id': '55dde4fe0275e1d747ef87c43a5f75a0c9e88eddcaf397162061c0d38101fd33', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 27 minutes'})
2025-06-02 01:12:43.983214 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ade0febcd7ae28d0a0da95515b4aeba0e8b8d7baddfd350bc5b5522776f667f7', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-02 01:12:43.983248 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a78db58c96c3973814d407cfc48b64dcd72c9d702c926bbcd1d2b45bc10a9d37', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-02 01:12:43.983270 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7e4e16a5ba4f54d9d7bb16322f355de24df61c1d1ba4f200e9d5ea2d7be12724', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-02 01:12:43.983292 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7591de1f92eae02e4cf7fcf946df7e8ec871956192b3718ef64a12b3790bb5d2', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-02 01:12:43.983311 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'de5ebbeeaa575475997a575f0b5f8c21f7847c8779416a089a9866901d76e79c', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 01:12:43.983332 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ae4d7acc0450622597866c686335dddad61c445455813b34931ed30a3d0252cd', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 01:12:43.983351 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1bfdd7c51ff89292d1d9161240626a72043b08baa71cff1e4d6b92ca8c011a6e', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-02 01:12:43.983370 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0111009a981c900132d8cbf47bf69c8647ae378dfb29438d8dc2bfc27d649157', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-02 01:12:43.983389 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6bc83798ec2db8e1ee37fce56e064617bf3c0e5c0d6d66133f7cefbdb08ed0ba', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 01:12:43.983408 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ad6dfe01f3cab9396e8484e5335cde8cb473245ab9a3f4cd68d779a6f52fbd7d', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 01:12:43.983430 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1fc0c8adb026d3eaf41570a5879c8c51bd3b10531245b9b7b18dbb5fe9e7c867', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-02 01:12:43.983448 | orchestrator | skipping: [testbed-node-5] => (item={'id': '092d1fc0c587bf74dc6071deb38cd895c81ef1afa0de14197b26dcba4dadd5b5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 19 minutes'})
2025-06-02 01:12:43.983488 | orchestrator | skipping: [testbed-node-5] => (item={'id': '20099f60654f1f94bc54b484e113db0c8c9a420c88ad6ff5a36d8516276f41fc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 20 minutes'})
2025-06-02 01:12:51.534962 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c6716f0537d399e970e508d7a1fb30380e04cdb1260e871438946d0f4b4f35dc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-02 01:12:51.535087 | orchestrator | ok: [testbed-node-5] => (item={'id': 'f853d9ed92101f9e6ef18545c1224c9d29aa8876ee510f34dea8adc0565212a6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 01:12:51.535132 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b4a660f1ee2a73bd2a0bd1bdae68dc4a30ee0eb115fed1b255363012eaa47f5f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 01:12:51.535146 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fb694e19d2112bf4a6048c30ad1193f9ef78a9b1873238ae5ada989af4bf617d', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})
2025-06-02 01:12:51.535159 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2323c86388d18cca691ae99236e7aceeb666623e577179807463dc28642f500b', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})
2025-06-02 01:12:51.535172 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1abae24677cc174720f6e19456667bde7aa5b987eed8877ec454e9f458740474', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2025-06-02 01:12:51.535183 | orchestrator | skipping: [testbed-node-5] => (item={'id': '033d85da748690678a1201d9bd00f9e51f7cbfb9f65a30c95b7bfcd59a54ad8a', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 27 minutes'})
2025-06-02 01:12:51.535194 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8805b57a2693c55c2bfa7e2478a37667025bcd725e50c3b3e4ad5459d87b370d', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 27 minutes'})
2025-06-02 01:12:51.535206 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1a042fe4a5bd742c0ebbd72dcf1a370e6ea99226baa669a7461fbd914399f5fa', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-02 01:12:51.535217 | orchestrator |
2025-06-02 01:12:51.535229 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-06-02 01:12:51.535241 | orchestrator | Monday 02 June 2025 01:12:43 +0000 (0:00:00.477) 0:00:04.040 ***********
2025-06-02 01:12:51.535252 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:12:51.535264 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:12:51.535274 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:12:51.535285 | orchestrator |
2025-06-02 01:12:51.535296 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-06-02 01:12:51.535307 | orchestrator | Monday 02 June 2025 01:12:44 +0000 (0:00:00.305) 0:00:04.346 ***********
2025-06-02 01:12:51.535318 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:12:51.535329 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:12:51.535340 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:12:51.535351 | orchestrator |
2025-06-02 01:12:51.535362 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-06-02 01:12:51.535373 | orchestrator | Monday 02 June 2025 01:12:44 +0000 (0:00:00.386) 0:00:04.732 ***********
2025-06-02 01:12:51.535383 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:12:51.535394 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:12:51.535405 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:12:51.535415 | orchestrator |
2025-06-02 01:12:51.535426 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 01:12:51.535437 | orchestrator | Monday 02 June 2025 01:12:44 +0000 (0:00:00.305) 0:00:05.038 ***********
2025-06-02 01:12:51.535450 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:12:51.535463 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:12:51.535481 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:12:51.535501 | orchestrator |
2025-06-02 01:12:51.535529 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-06-02 01:12:51.535542 | orchestrator | Monday 02 June 2025 01:12:45 +0000 (0:00:00.276) 0:00:05.314 ***********
2025-06-02 01:12:51.535555 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-06-02 01:12:51.535574 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-06-02 01:12:51.535594 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:12:51.535630 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-06-02 01:12:51.535651 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-06-02 01:12:51.535695 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:12:51.535710 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-06-02 01:12:51.535722 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-06-02 01:12:51.535735 | orchestrator | skipping: [testbed-node-5]
2025-06-02 01:12:51.535748 | orchestrator |
2025-06-02 01:12:51.535760 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-06-02 01:12:51.535772 | orchestrator | Monday 02 June 2025 01:12:45 +0000 (0:00:00.298) 0:00:05.613 ***********
2025-06-02 01:12:51.535785 | orchestrator | ok: [testbed-node-3]
2025-06-02 01:12:51.535798 | orchestrator | ok: [testbed-node-4]
2025-06-02 01:12:51.535843 | orchestrator | ok: [testbed-node-5]
2025-06-02 01:12:51.535857 | orchestrator |
2025-06-02 01:12:51.535868 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-06-02 01:12:51.535878 | orchestrator | Monday 02 June 2025 01:12:46 +0000 (0:00:00.466) 0:00:06.079 ***********
2025-06-02 01:12:51.535889 | orchestrator | skipping: [testbed-node-3]
2025-06-02 01:12:51.535900 | orchestrator | skipping: [testbed-node-4]
2025-06-02 01:12:51.535911 | orchestrator |
skipping: [testbed-node-5] 2025-06-02 01:12:51.535921 | orchestrator | 2025-06-02 01:12:51.535932 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-02 01:12:51.535942 | orchestrator | Monday 02 June 2025 01:12:46 +0000 (0:00:00.282) 0:00:06.361 *********** 2025-06-02 01:12:51.535953 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:12:51.535964 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:12:51.535974 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:12:51.535985 | orchestrator | 2025-06-02 01:12:51.535996 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-02 01:12:51.536006 | orchestrator | Monday 02 June 2025 01:12:46 +0000 (0:00:00.256) 0:00:06.618 *********** 2025-06-02 01:12:51.536017 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:12:51.536028 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:12:51.536038 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:12:51.536049 | orchestrator | 2025-06-02 01:12:51.536060 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 01:12:51.536070 | orchestrator | Monday 02 June 2025 01:12:46 +0000 (0:00:00.283) 0:00:06.901 *********** 2025-06-02 01:12:51.536081 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:12:51.536092 | orchestrator | 2025-06-02 01:12:51.536102 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 01:12:51.536113 | orchestrator | Monday 02 June 2025 01:12:47 +0000 (0:00:00.586) 0:00:07.488 *********** 2025-06-02 01:12:51.536124 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:12:51.536135 | orchestrator | 2025-06-02 01:12:51.536146 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 01:12:51.536157 | orchestrator | Monday 02 June 2025 01:12:47 +0000 (0:00:00.249) 0:00:07.737 
*********** 2025-06-02 01:12:51.536167 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:12:51.536178 | orchestrator | 2025-06-02 01:12:51.536189 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:12:51.536208 | orchestrator | Monday 02 June 2025 01:12:47 +0000 (0:00:00.233) 0:00:07.971 *********** 2025-06-02 01:12:51.536219 | orchestrator | 2025-06-02 01:12:51.536230 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:12:51.536240 | orchestrator | Monday 02 June 2025 01:12:47 +0000 (0:00:00.069) 0:00:08.040 *********** 2025-06-02 01:12:51.536251 | orchestrator | 2025-06-02 01:12:51.536262 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:12:51.536272 | orchestrator | Monday 02 June 2025 01:12:48 +0000 (0:00:00.069) 0:00:08.110 *********** 2025-06-02 01:12:51.536283 | orchestrator | 2025-06-02 01:12:51.536294 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 01:12:51.536307 | orchestrator | Monday 02 June 2025 01:12:48 +0000 (0:00:00.073) 0:00:08.184 *********** 2025-06-02 01:12:51.536326 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:12:51.536345 | orchestrator | 2025-06-02 01:12:51.536358 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-02 01:12:51.536368 | orchestrator | Monday 02 June 2025 01:12:48 +0000 (0:00:00.250) 0:00:08.434 *********** 2025-06-02 01:12:51.536379 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:12:51.536389 | orchestrator | 2025-06-02 01:12:51.536400 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 01:12:51.536410 | orchestrator | Monday 02 June 2025 01:12:48 +0000 (0:00:00.220) 0:00:08.654 *********** 2025-06-02 01:12:51.536429 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 01:12:51.536449 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:12:51.536466 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:12:51.536477 | orchestrator | 2025-06-02 01:12:51.536495 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-02 01:12:51.536515 | orchestrator | Monday 02 June 2025 01:12:48 +0000 (0:00:00.301) 0:00:08.956 *********** 2025-06-02 01:12:51.536533 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:12:51.536544 | orchestrator | 2025-06-02 01:12:51.536555 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-02 01:12:51.536565 | orchestrator | Monday 02 June 2025 01:12:49 +0000 (0:00:00.621) 0:00:09.577 *********** 2025-06-02 01:12:51.536576 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 01:12:51.536587 | orchestrator | 2025-06-02 01:12:51.536597 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-02 01:12:51.536616 | orchestrator | Monday 02 June 2025 01:12:51 +0000 (0:00:01.498) 0:00:11.076 *********** 2025-06-02 01:12:51.536635 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:12:51.536653 | orchestrator | 2025-06-02 01:12:51.536664 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-02 01:12:51.536681 | orchestrator | Monday 02 June 2025 01:12:51 +0000 (0:00:00.119) 0:00:11.196 *********** 2025-06-02 01:12:51.536699 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:12:51.536717 | orchestrator | 2025-06-02 01:12:51.536748 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-02 01:12:51.536767 | orchestrator | Monday 02 June 2025 01:12:51 +0000 (0:00:00.292) 0:00:11.488 *********** 2025-06-02 01:12:51.536796 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:13:03.501453 | orchestrator | 
2025-06-02 01:13:03.501576 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-02 01:13:03.501594 | orchestrator | Monday 02 June 2025 01:12:51 +0000 (0:00:00.115) 0:00:11.604 *********** 2025-06-02 01:13:03.501606 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:13:03.501619 | orchestrator | 2025-06-02 01:13:03.501630 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 01:13:03.501642 | orchestrator | Monday 02 June 2025 01:12:51 +0000 (0:00:00.124) 0:00:11.728 *********** 2025-06-02 01:13:03.501653 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:13:03.501664 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:13:03.501675 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:13:03.501685 | orchestrator | 2025-06-02 01:13:03.501696 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-02 01:13:03.501735 | orchestrator | Monday 02 June 2025 01:12:51 +0000 (0:00:00.273) 0:00:12.002 *********** 2025-06-02 01:13:03.501747 | orchestrator | changed: [testbed-node-3] 2025-06-02 01:13:03.501764 | orchestrator | changed: [testbed-node-5] 2025-06-02 01:13:03.501783 | orchestrator | changed: [testbed-node-4] 2025-06-02 01:13:03.501801 | orchestrator | 2025-06-02 01:13:03.501864 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-02 01:13:03.501883 | orchestrator | Monday 02 June 2025 01:12:54 +0000 (0:00:02.393) 0:00:14.395 *********** 2025-06-02 01:13:03.501900 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:13:03.501919 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:13:03.501938 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:13:03.501956 | orchestrator | 2025-06-02 01:13:03.501973 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-02 01:13:03.501986 | orchestrator | Monday 02 June 2025 
01:12:54 +0000 (0:00:00.294) 0:00:14.690 *********** 2025-06-02 01:13:03.502004 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:13:03.502103 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:13:03.502129 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:13:03.502148 | orchestrator | 2025-06-02 01:13:03.502168 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-02 01:13:03.502187 | orchestrator | Monday 02 June 2025 01:12:55 +0000 (0:00:00.598) 0:00:15.288 *********** 2025-06-02 01:13:03.502268 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:13:03.502291 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:13:03.502310 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:13:03.502328 | orchestrator | 2025-06-02 01:13:03.502347 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-02 01:13:03.502366 | orchestrator | Monday 02 June 2025 01:12:55 +0000 (0:00:00.320) 0:00:15.608 *********** 2025-06-02 01:13:03.502385 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:13:03.502404 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:13:03.502422 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:13:03.502441 | orchestrator | 2025-06-02 01:13:03.502460 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-02 01:13:03.502477 | orchestrator | Monday 02 June 2025 01:12:55 +0000 (0:00:00.453) 0:00:16.062 *********** 2025-06-02 01:13:03.502505 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:13:03.502525 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:13:03.502544 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:13:03.502563 | orchestrator | 2025-06-02 01:13:03.502653 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-02 01:13:03.502678 | orchestrator | Monday 02 June 2025 01:12:56 +0000 (0:00:00.272) 
0:00:16.334 *********** 2025-06-02 01:13:03.502697 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:13:03.502718 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:13:03.502739 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:13:03.502759 | orchestrator | 2025-06-02 01:13:03.502776 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 01:13:03.502787 | orchestrator | Monday 02 June 2025 01:12:56 +0000 (0:00:00.263) 0:00:16.598 *********** 2025-06-02 01:13:03.502797 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:13:03.502808 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:13:03.502845 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:13:03.502856 | orchestrator | 2025-06-02 01:13:03.502867 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-02 01:13:03.502878 | orchestrator | Monday 02 June 2025 01:12:56 +0000 (0:00:00.457) 0:00:17.055 *********** 2025-06-02 01:13:03.502888 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:13:03.502899 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:13:03.502910 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:13:03.502923 | orchestrator | 2025-06-02 01:13:03.502942 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-02 01:13:03.502960 | orchestrator | Monday 02 June 2025 01:12:57 +0000 (0:00:00.652) 0:00:17.707 *********** 2025-06-02 01:13:03.502995 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:13:03.503013 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:13:03.503031 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:13:03.503049 | orchestrator | 2025-06-02 01:13:03.503067 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-02 01:13:03.503084 | orchestrator | Monday 02 June 2025 01:12:57 +0000 (0:00:00.296) 0:00:18.003 *********** 2025-06-02 01:13:03.503100 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 01:13:03.503117 | orchestrator | skipping: [testbed-node-4] 2025-06-02 01:13:03.503134 | orchestrator | skipping: [testbed-node-5] 2025-06-02 01:13:03.503151 | orchestrator | 2025-06-02 01:13:03.503170 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-02 01:13:03.503187 | orchestrator | Monday 02 June 2025 01:12:58 +0000 (0:00:00.281) 0:00:18.285 *********** 2025-06-02 01:13:03.503204 | orchestrator | ok: [testbed-node-3] 2025-06-02 01:13:03.503221 | orchestrator | ok: [testbed-node-4] 2025-06-02 01:13:03.503239 | orchestrator | ok: [testbed-node-5] 2025-06-02 01:13:03.503256 | orchestrator | 2025-06-02 01:13:03.503273 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 01:13:03.503291 | orchestrator | Monday 02 June 2025 01:12:58 +0000 (0:00:00.287) 0:00:18.573 *********** 2025-06-02 01:13:03.503308 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 01:13:03.503325 | orchestrator | 2025-06-02 01:13:03.503355 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 01:13:03.503373 | orchestrator | Monday 02 June 2025 01:12:59 +0000 (0:00:00.670) 0:00:19.243 *********** 2025-06-02 01:13:03.503390 | orchestrator | skipping: [testbed-node-3] 2025-06-02 01:13:03.503408 | orchestrator | 2025-06-02 01:13:03.503456 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 01:13:03.503475 | orchestrator | Monday 02 June 2025 01:12:59 +0000 (0:00:00.242) 0:00:19.485 *********** 2025-06-02 01:13:03.503494 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 01:13:03.503512 | orchestrator | 2025-06-02 01:13:03.503530 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 01:13:03.503542 | 
orchestrator | Monday 02 June 2025 01:13:01 +0000 (0:00:01.592) 0:00:21.078 *********** 2025-06-02 01:13:03.503553 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 01:13:03.503564 | orchestrator | 2025-06-02 01:13:03.503574 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 01:13:03.503585 | orchestrator | Monday 02 June 2025 01:13:01 +0000 (0:00:00.241) 0:00:21.319 *********** 2025-06-02 01:13:03.503596 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 01:13:03.503607 | orchestrator | 2025-06-02 01:13:03.503618 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:13:03.503629 | orchestrator | Monday 02 June 2025 01:13:01 +0000 (0:00:00.252) 0:00:21.572 *********** 2025-06-02 01:13:03.503648 | orchestrator | 2025-06-02 01:13:03.503666 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:13:03.503684 | orchestrator | Monday 02 June 2025 01:13:01 +0000 (0:00:00.069) 0:00:21.641 *********** 2025-06-02 01:13:03.503701 | orchestrator | 2025-06-02 01:13:03.503719 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 01:13:03.503743 | orchestrator | Monday 02 June 2025 01:13:01 +0000 (0:00:00.065) 0:00:21.707 *********** 2025-06-02 01:13:03.503766 | orchestrator | 2025-06-02 01:13:03.503784 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 01:13:03.503802 | orchestrator | Monday 02 June 2025 01:13:01 +0000 (0:00:00.068) 0:00:21.775 *********** 2025-06-02 01:13:03.503853 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 01:13:03.503873 | orchestrator | 2025-06-02 01:13:03.503891 | orchestrator | TASK [Print report file information] ******************************************* 
2025-06-02 01:13:03.503907 | orchestrator | Monday 02 June 2025 01:13:02 +0000 (0:00:01.224) 0:00:23.000 *********** 2025-06-02 01:13:03.503931 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-02 01:13:03.503943 | orchestrator |  "msg": [ 2025-06-02 01:13:03.503955 | orchestrator |  "Validator run completed.", 2025-06-02 01:13:03.503966 | orchestrator |  "You can find the report file here:", 2025-06-02 01:13:03.503977 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-02T01:12:40+00:00-report.json", 2025-06-02 01:13:03.503989 | orchestrator |  "on the following host:", 2025-06-02 01:13:03.504000 | orchestrator |  "testbed-manager" 2025-06-02 01:13:03.504011 | orchestrator |  ] 2025-06-02 01:13:03.504022 | orchestrator | } 2025-06-02 01:13:03.504033 | orchestrator | 2025-06-02 01:13:03.504044 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 01:13:03.504056 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-02 01:13:03.504069 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 01:13:03.504080 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 01:13:03.504090 | orchestrator | 2025-06-02 01:13:03.504101 | orchestrator | 2025-06-02 01:13:03.504112 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 01:13:03.504123 | orchestrator | Monday 02 June 2025 01:13:03 +0000 (0:00:00.536) 0:00:23.536 *********** 2025-06-02 01:13:03.504133 | orchestrator | =============================================================================== 2025-06-02 01:13:03.504144 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.39s 2025-06-02 01:13:03.504155 | orchestrator | Aggregate test results 
step one ----------------------------------------- 1.59s 2025-06-02 01:13:03.504165 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.50s 2025-06-02 01:13:03.504176 | orchestrator | Write report file ------------------------------------------------------- 1.22s 2025-06-02 01:13:03.504186 | orchestrator | Create report output directory ------------------------------------------ 0.76s 2025-06-02 01:13:03.504196 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.67s 2025-06-02 01:13:03.504205 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.65s 2025-06-02 01:13:03.504215 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.62s 2025-06-02 01:13:03.504224 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.60s 2025-06-02 01:13:03.504233 | orchestrator | Aggregate test results step one ----------------------------------------- 0.59s 2025-06-02 01:13:03.504243 | orchestrator | Print report file information ------------------------------------------- 0.54s 2025-06-02 01:13:03.504252 | orchestrator | Get timestamp for report file ------------------------------------------- 0.51s 2025-06-02 01:13:03.504265 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.48s 2025-06-02 01:13:03.504281 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.47s 2025-06-02 01:13:03.504308 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s 2025-06-02 01:13:03.504331 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.45s 2025-06-02 01:13:03.504359 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.43s 2025-06-02 01:13:03.736747 | orchestrator | Set test result to failed when count 
of containers is wrong ------------- 0.39s 2025-06-02 01:13:03.736900 | orchestrator | Prepare test data ------------------------------------------------------- 0.37s 2025-06-02 01:13:03.736915 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.32s 2025-06-02 01:13:03.968700 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-02 01:13:03.976266 | orchestrator | + set -e 2025-06-02 01:13:03.976401 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 01:13:03.976414 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 01:13:03.976425 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 01:13:03.976436 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 01:13:03.976447 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 01:13:03.976458 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 01:13:03.976470 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 01:13:03.976481 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 01:13:03.976492 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 01:13:03.976503 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 01:13:03.976514 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 01:13:03.976525 | orchestrator | ++ export ARA=false 2025-06-02 01:13:03.976536 | orchestrator | ++ ARA=false 2025-06-02 01:13:03.976547 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 01:13:03.976558 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 01:13:03.976569 | orchestrator | ++ export TEMPEST=false 2025-06-02 01:13:03.976579 | orchestrator | ++ TEMPEST=false 2025-06-02 01:13:03.976590 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 01:13:03.976601 | orchestrator | ++ IS_ZUUL=true 2025-06-02 01:13:03.976611 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2025-06-02 01:13:03.976622 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2025-06-02 01:13:03.976633 | orchestrator | ++ 
export EXTERNAL_API=false 2025-06-02 01:13:03.976643 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 01:13:03.976654 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 01:13:03.976665 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 01:13:03.976675 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 01:13:03.976686 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 01:13:03.976697 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 01:13:03.976707 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 01:13:03.976718 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-02 01:13:03.976728 | orchestrator | + source /etc/os-release 2025-06-02 01:13:03.976739 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-02 01:13:03.976750 | orchestrator | ++ NAME=Ubuntu 2025-06-02 01:13:03.976761 | orchestrator | ++ VERSION_ID=24.04 2025-06-02 01:13:03.976772 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-02 01:13:03.976783 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-02 01:13:03.976801 | orchestrator | ++ ID=ubuntu 2025-06-02 01:13:03.976833 | orchestrator | ++ ID_LIKE=debian 2025-06-02 01:13:03.976845 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-02 01:13:03.976856 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-02 01:13:03.976867 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-02 01:13:03.976878 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-02 01:13:03.976889 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-02 01:13:03.976900 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-02 01:13:03.976910 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-02 01:13:03.976923 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-02 01:13:03.976935 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl 
monitoring-plugins-basic mysql-client 2025-06-02 01:13:04.000987 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-02 01:13:24.074535 | orchestrator | 2025-06-02 01:13:24.074669 | orchestrator | # Status of Elasticsearch 2025-06-02 01:13:24.074687 | orchestrator | 2025-06-02 01:13:24.074700 | orchestrator | + pushd /opt/configuration/contrib 2025-06-02 01:13:24.074713 | orchestrator | + echo 2025-06-02 01:13:24.074724 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-02 01:13:24.074736 | orchestrator | + echo 2025-06-02 01:13:24.074747 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-02 01:13:24.177056 | orchestrator | CRITICAL - Could not connect to server api-int.testbed.osism.xyz 2025-06-02 01:13:24.481974 | orchestrator | ERROR 2025-06-02 01:13:24.482410 | orchestrator | { 2025-06-02 01:13:24.482508 | orchestrator | "delta": "0:02:11.538472", 2025-06-02 01:13:24.482562 | orchestrator | "end": "2025-06-02 01:13:24.187463", 2025-06-02 01:13:24.482618 | orchestrator | "msg": "non-zero return code", 2025-06-02 01:13:24.482695 | orchestrator | "rc": 2, 2025-06-02 01:13:24.482775 | orchestrator | "start": "2025-06-02 01:11:12.648991" 2025-06-02 01:13:24.482816 | orchestrator | } failure 2025-06-02 01:13:24.527469 | 2025-06-02 01:13:24.527719 | PLAY RECAP 2025-06-02 01:13:24.528043 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-06-02 01:13:24.528232 | 2025-06-02 01:13:24.768913 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-06-02 01:13:24.770326 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-02 01:13:25.507146 | 2025-06-02 01:13:25.507375 | PLAY [Post output play] 2025-06-02 01:13:25.524033 | 2025-06-02 01:13:25.524172 | LOOP [stage-output : Register sources] 2025-06-02 
2025-06-02 01:13:25.594330 |
2025-06-02 01:13:25.594673 | TASK [stage-output : Check sudo]
2025-06-02 01:13:26.480000 | orchestrator | sudo: a password is required
2025-06-02 01:13:26.635881 | orchestrator | ok: Runtime: 0:00:00.016137
2025-06-02 01:13:26.651389 |
2025-06-02 01:13:26.651555 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-02 01:13:26.688933 |
2025-06-02 01:13:26.689246 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-02 01:13:26.768427 | orchestrator | ok
2025-06-02 01:13:26.777261 |
2025-06-02 01:13:26.777404 | LOOP [stage-output : Ensure target folders exist]
2025-06-02 01:13:27.218422 | orchestrator | ok: "docs"
2025-06-02 01:13:27.218769 |
2025-06-02 01:13:27.470096 | orchestrator | ok: "artifacts"
2025-06-02 01:13:27.725833 | orchestrator | ok: "logs"
2025-06-02 01:13:27.748604 |
2025-06-02 01:13:27.748841 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-02 01:13:27.788033 |
2025-06-02 01:13:27.788358 | TASK [stage-output : Make all log files readable]
2025-06-02 01:13:28.082724 | orchestrator | ok
2025-06-02 01:13:28.093609 |
2025-06-02 01:13:28.093818 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-02 01:13:28.129842 | orchestrator | skipping: Conditional result was False
2025-06-02 01:13:28.139608 |
2025-06-02 01:13:28.139728 | TASK [stage-output : Discover log files for compression]
2025-06-02 01:13:28.154106 | orchestrator | skipping: Conditional result was False
2025-06-02 01:13:28.168124 |
2025-06-02 01:13:28.168342 | LOOP [stage-output : Archive everything from logs]
2025-06-02 01:13:28.217713 |
2025-06-02 01:13:28.217969 | PLAY [Post cleanup play]
2025-06-02 01:13:28.226512 |
2025-06-02 01:13:28.226640 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 01:13:28.312136 | orchestrator | ok
2025-06-02 01:13:28.323989 |
2025-06-02 01:13:28.324124 | TASK [Set cloud fact (local deployment)]
2025-06-02 01:13:28.359688 | orchestrator | skipping: Conditional result was False
2025-06-02 01:13:28.374482 |
2025-06-02 01:13:28.374641 | TASK [Clean the cloud environment]
2025-06-02 01:13:28.964840 | orchestrator | 2025-06-02 01:13:28 - clean up servers
2025-06-02 01:13:29.693290 | orchestrator | 2025-06-02 01:13:29 - testbed-manager
2025-06-02 01:13:29.778623 | orchestrator | 2025-06-02 01:13:29 - testbed-node-1
2025-06-02 01:13:29.867678 | orchestrator | 2025-06-02 01:13:29 - testbed-node-0
2025-06-02 01:13:29.967300 | orchestrator | 2025-06-02 01:13:29 - testbed-node-2
2025-06-02 01:13:30.064848 | orchestrator | 2025-06-02 01:13:30 - testbed-node-5
2025-06-02 01:13:30.160458 | orchestrator | 2025-06-02 01:13:30 - testbed-node-3
2025-06-02 01:13:30.251870 | orchestrator | 2025-06-02 01:13:30 - testbed-node-4
2025-06-02 01:13:30.340400 | orchestrator | 2025-06-02 01:13:30 - clean up keypairs
2025-06-02 01:13:30.361006 | orchestrator | 2025-06-02 01:13:30 - testbed
2025-06-02 01:13:30.387721 | orchestrator | 2025-06-02 01:13:30 - wait for servers to be gone
2025-06-02 01:13:39.241328 | orchestrator | 2025-06-02 01:13:39 - clean up ports
2025-06-02 01:13:39.432027 | orchestrator | 2025-06-02 01:13:39 - 18c9061f-8864-4822-ad2a-d6fe972e306b
2025-06-02 01:13:39.732459 | orchestrator | 2025-06-02 01:13:39 - 7a1fb132-248a-42ff-bd54-b74f9848f96a
2025-06-02 01:13:39.973701 | orchestrator | 2025-06-02 01:13:39 - 811287d4-abe9-434d-9f64-1ee7a0bcda0d
2025-06-02 01:13:40.221438 | orchestrator | 2025-06-02 01:13:40 - 8cf5d7a8-1702-4d98-b331-f75cc7889911
2025-06-02 01:13:40.714729 | orchestrator | 2025-06-02 01:13:40 - d0b04ae8-6fb6-4851-bb44-abfa89bb64b5
2025-06-02 01:13:40.933366 | orchestrator | 2025-06-02 01:13:40 - f832cf9b-a7ba-4816-bd2e-f0a44663791d
2025-06-02 01:13:41.165365 | orchestrator | 2025-06-02 01:13:41 - fbcf522b-d160-4b05-943d-25d5083a9185
2025-06-02 01:13:41.389066 | orchestrator | 2025-06-02 01:13:41 - clean up volumes
2025-06-02 01:13:41.496302 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-manager-base
2025-06-02 01:13:41.534629 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-3-node-base
2025-06-02 01:13:41.579472 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-2-node-base
2025-06-02 01:13:41.620672 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-5-node-base
2025-06-02 01:13:41.665971 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-1-node-base
2025-06-02 01:13:41.713252 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-4-node-base
2025-06-02 01:13:41.755055 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-0-node-base
2025-06-02 01:13:41.797253 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-0-node-3
2025-06-02 01:13:41.843643 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-1-node-4
2025-06-02 01:13:41.886272 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-2-node-5
2025-06-02 01:13:41.929934 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-8-node-5
2025-06-02 01:13:41.976279 | orchestrator | 2025-06-02 01:13:41 - testbed-volume-6-node-3
2025-06-02 01:13:42.020236 | orchestrator | 2025-06-02 01:13:42 - testbed-volume-5-node-5
2025-06-02 01:13:42.061939 | orchestrator | 2025-06-02 01:13:42 - testbed-volume-3-node-3
2025-06-02 01:13:42.115562 | orchestrator | 2025-06-02 01:13:42 - testbed-volume-4-node-4
2025-06-02 01:13:42.161762 | orchestrator | 2025-06-02 01:13:42 - testbed-volume-7-node-4
2025-06-02 01:13:42.201508 | orchestrator | 2025-06-02 01:13:42 - disconnect routers
2025-06-02 01:13:42.316627 | orchestrator | 2025-06-02 01:13:42 - testbed
2025-06-02 01:13:43.283365 | orchestrator | 2025-06-02 01:13:43 - clean up subnets
2025-06-02 01:13:43.326141 | orchestrator | 2025-06-02 01:13:43 - subnet-testbed-management
2025-06-02 01:13:43.497503 | orchestrator | 2025-06-02 01:13:43 - clean up networks
2025-06-02 01:13:43.670614 | orchestrator | 2025-06-02 01:13:43 - net-testbed-management
2025-06-02 01:13:43.946238 | orchestrator | 2025-06-02 01:13:43 - clean up security groups
2025-06-02 01:13:43.989296 | orchestrator | 2025-06-02 01:13:43 - testbed-management
2025-06-02 01:13:44.100736 | orchestrator | 2025-06-02 01:13:44 - testbed-node
2025-06-02 01:13:44.211167 | orchestrator | 2025-06-02 01:13:44 - clean up floating ips
2025-06-02 01:13:44.245219 | orchestrator | 2025-06-02 01:13:44 - 81.163.192.203
2025-06-02 01:13:44.609998 | orchestrator | 2025-06-02 01:13:44 - clean up routers
2025-06-02 01:13:44.715316 | orchestrator | 2025-06-02 01:13:44 - testbed
2025-06-02 01:13:45.932407 | orchestrator | ok: Runtime: 0:00:16.857156
2025-06-02 01:13:45.936087 |
2025-06-02 01:13:45.936234 | PLAY RECAP
2025-06-02 01:13:45.936346 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-02 01:13:45.936401 |
2025-06-02 01:13:46.084571 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 01:13:46.085757 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 01:13:46.866300 |
2025-06-02 01:13:46.866469 | PLAY [Cleanup play]
2025-06-02 01:13:46.882716 |
2025-06-02 01:13:46.882902 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 01:13:46.923807 | orchestrator | ok
2025-06-02 01:13:46.930904 |
2025-06-02 01:13:46.931059 | TASK [Set cloud fact (local deployment)]
2025-06-02 01:13:46.965794 | orchestrator | skipping: Conditional result was False
2025-06-02 01:13:46.986919 |
2025-06-02 01:13:46.987131 | TASK [Clean the cloud environment]
2025-06-02 01:13:48.169729 | orchestrator | 2025-06-02 01:13:48 - clean up servers
2025-06-02 01:13:48.638866 | orchestrator | 2025-06-02 01:13:48 - clean up keypairs
2025-06-02 01:13:48.655394 | orchestrator | 2025-06-02 01:13:48 - wait for servers to be gone
2025-06-02 01:13:48.696573 | orchestrator | 2025-06-02 01:13:48 - clean up ports
2025-06-02 01:13:48.775428 | orchestrator | 2025-06-02 01:13:48 - clean up volumes
2025-06-02 01:13:48.846225 | orchestrator | 2025-06-02 01:13:48 - disconnect routers
2025-06-02 01:13:48.872780 | orchestrator | 2025-06-02 01:13:48 - clean up subnets
2025-06-02 01:13:48.893080 | orchestrator | 2025-06-02 01:13:48 - clean up networks
2025-06-02 01:13:49.051649 | orchestrator | 2025-06-02 01:13:49 - clean up security groups
2025-06-02 01:13:49.086793 | orchestrator | 2025-06-02 01:13:49 - clean up floating ips
2025-06-02 01:13:49.111323 | orchestrator | 2025-06-02 01:13:49 - clean up routers
2025-06-02 01:13:49.532302 | orchestrator | ok: Runtime: 0:00:01.334184
2025-06-02 01:13:49.534368 |
2025-06-02 01:13:49.534461 | PLAY RECAP
2025-06-02 01:13:49.534521 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-02 01:13:49.534546 |
2025-06-02 01:13:49.674414 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 01:13:49.675572 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 01:13:50.489332 |
2025-06-02 01:13:50.489534 | PLAY [Base post-fetch]
2025-06-02 01:13:50.505946 |
2025-06-02 01:13:50.506093 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-02 01:13:50.562413 | orchestrator | skipping: Conditional result was False
2025-06-02 01:13:50.569881 |
2025-06-02 01:13:50.570072 | TASK [fetch-output : Set log path for single node]
2025-06-02 01:13:50.613138 | orchestrator | ok
2025-06-02 01:13:50.621710 |
2025-06-02 01:13:50.621892 | LOOP [fetch-output : Ensure local output dirs]
2025-06-02 01:13:51.159515 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/6866d38e26b6403fad245960ab6da0bc/work/logs"
2025-06-02 01:13:51.437260 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6866d38e26b6403fad245960ab6da0bc/work/artifacts"
2025-06-02 01:13:51.720933 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6866d38e26b6403fad245960ab6da0bc/work/docs"
2025-06-02 01:13:51.742477 |
2025-06-02 01:13:51.742646 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-02 01:13:52.675613 | orchestrator | changed: .d..t...... ./
2025-06-02 01:13:52.675942 | orchestrator | changed: All items complete
2025-06-02 01:13:52.675981 |
2025-06-02 01:13:53.456107 | orchestrator | changed: .d..t...... ./
2025-06-02 01:13:54.204023 | orchestrator | changed: .d..t...... ./
2025-06-02 01:13:54.231456 |
2025-06-02 01:13:54.231596 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-02 01:13:54.269563 | orchestrator | skipping: Conditional result was False
2025-06-02 01:13:54.272795 | orchestrator | skipping: Conditional result was False
2025-06-02 01:13:54.286707 |
2025-06-02 01:13:54.286996 | PLAY RECAP
2025-06-02 01:13:54.287249 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-02 01:13:54.287337 |
2025-06-02 01:13:54.434881 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 01:13:54.437332 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 01:13:55.239599 |
2025-06-02 01:13:55.239833 | PLAY [Base post]
2025-06-02 01:13:55.255529 |
2025-06-02 01:13:55.255697 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-02 01:13:56.273244 | orchestrator | changed
2025-06-02 01:13:56.283580 |
2025-06-02 01:13:56.283706 | PLAY RECAP
2025-06-02 01:13:56.283811 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-02 01:13:56.283892 |
2025-06-02 01:13:56.406351 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 01:13:56.407418 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-02 01:13:57.238080 |
2025-06-02 01:13:57.238265 | PLAY [Base post-logs]
2025-06-02 01:13:57.249288 |
2025-06-02 01:13:57.249433 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-02 01:13:57.733268 | localhost | changed
2025-06-02 01:13:57.744032 |
2025-06-02 01:13:57.744193 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-02 01:13:57.781091 | localhost | ok
2025-06-02 01:13:57.784924 |
2025-06-02 01:13:57.785045 | TASK [Set zuul-log-path fact]
2025-06-02 01:13:57.812356 | localhost | ok
2025-06-02 01:13:57.825175 |
2025-06-02 01:13:57.825315 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 01:13:57.863123 | localhost | ok
2025-06-02 01:13:57.869740 |
2025-06-02 01:13:57.869923 | TASK [upload-logs : Create log directories]
2025-06-02 01:13:58.402138 | localhost | changed
2025-06-02 01:13:58.407965 |
2025-06-02 01:13:58.408151 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-02 01:13:58.933082 | localhost -> localhost | ok: Runtime: 0:00:00.007177
2025-06-02 01:13:58.943858 |
2025-06-02 01:13:58.944147 | TASK [upload-logs : Upload logs to log server]
2025-06-02 01:13:59.546110 | localhost | Output suppressed because no_log was given
2025-06-02 01:13:59.550520 |
2025-06-02 01:13:59.550803 | LOOP [upload-logs : Compress console log and json output]
2025-06-02 01:13:59.610615 | localhost | skipping: Conditional result was False
2025-06-02 01:13:59.615373 | localhost | skipping: Conditional result was False
2025-06-02 01:13:59.628895 |
2025-06-02 01:13:59.629135 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-02 01:13:59.689912 | localhost | skipping: Conditional result was False
2025-06-02 01:13:59.690798 |
2025-06-02 01:13:59.694348 | localhost | skipping: Conditional result was False
2025-06-02 01:13:59.701941 |
2025-06-02 01:13:59.702193 | LOOP [upload-logs : Upload console log and json output]
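The PLAY RECAP lines in this console output summarize each post-run playbook as a flat counter line (`ok: N changed: N unreachable: N failed: N ...`). A minimal sketch of how such a line could be parsed when post-processing a log like this one; the helper names (`parse_recap`, `recap_ok`) are hypothetical, not part of Zuul or Ansible:

```python
import re

# Each counter appears as "name: <int>" in the recap line.
RECAP_RE = re.compile(r"(\w+):\s*(\d+)")

def parse_recap(line: str) -> dict:
    """Parse an Ansible PLAY RECAP counter line into a dict of ints."""
    return {key: int(val) for key, val in RECAP_RE.findall(line)}

def recap_ok(counts: dict) -> bool:
    """A play is healthy when nothing failed and no host was unreachable."""
    return counts.get("failed", 0) == 0 and counts.get("unreachable", 0) == 0

# Recap line taken verbatim from the "Post cleanup play" above.
recap = "ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0"
counts = parse_recap(recap)
print(counts["ok"], recap_ok(counts))  # → 6 True
```

All four recaps in this section report `failed: 0` and `unreachable: 0`, consistent with the `POST-RUN END RESULT_NORMAL` markers.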